Sample records for full size iter

  1. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

    The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signing of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurement of several MITICA components is at a well-advanced stage.

  2. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source including the expansion, extraction, and part of the acceleration regions, keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. Magnetic filter and electron deflection fields have been included, and a negative ion current density of j(H−) = 660 A/m² from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included and a fixed ambipolar flux is emitted from the driver exit plane. Results show the strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. Such asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.
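
    The abstract above describes a particle-in-cell Monte Carlo collision (PIC-MCC) model. As a hedged illustration of the basic PIC cycle only (charge deposition, field solve, particle push, collision step), here is a minimal 1D electrostatic analogue in Python; the grid, particle statistics and crude collision step are invented toy parameters, not the 2.5D source model of the paper.

      import numpy as np

      # Minimal 1D electrostatic PIC cycle with a crude Monte Carlo collision step.
      # Illustrative toy only: normalized units, invented parameters, periodic box.
      ng, n_part, L, dt, steps = 64, 10000, 1.0, 0.05, 100
      dx = L / ng
      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, L, n_part)          # particle positions
      v = rng.normal(0.0, 0.1, n_part)         # particle velocities
      q_over_m, weight = -1.0, L / n_part      # charge/mass ratio, particle weight

      for step in range(steps):
          # 1) Deposit charge on the grid (linear cloud-in-cell weighting).
          g = x / dx
          i = np.floor(g).astype(int) % ng
          f = g - np.floor(g)
          rho = (np.bincount(i, weights=weight * (1 - f), minlength=ng)
                 + np.bincount((i + 1) % ng, weights=weight * f, minlength=ng))
          rho -= rho.mean()                     # neutralizing background

          # 2) Spectral Poisson solve: -phi'' = rho  =>  phi_k = rho_k / k^2.
          k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
          rho_k = np.fft.fft(rho)
          phi_k = np.zeros_like(rho_k)
          phi_k[1:] = rho_k[1:] / k[1:] ** 2
          E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -dphi/dx

          # 3) Gather the field at particle positions and push (kick-drift).
          Ep = E[i] * (1 - f) + E[(i + 1) % ng] * f
          v += q_over_m * Ep * dt
          x = (x + v * dt) % L

          # 4) Crude MCC stand-in: re-thermalize a small random fraction.
          hit = rng.random(n_part) < 0.01
          v[hit] = rng.normal(0.0, 0.1, hit.sum())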

  3. Status of the 1 MeV Accelerator Design for ITER NBI

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.

    2011-09-01

    The beam source of the neutral beam heating/current drive system for ITER must accelerate a negative ion beam of 40 A of D- to 1 MeV for 3600 s. In order to realize this beam source, design and R&D work is being carried out in many institutions under the coordination of the ITER Organization. Work on the key issues of the ion source, including source plasma uniformity and the suppression of co-extracted electrons in D beam operation, also over long beam durations of more than a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and then SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. Development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.

  4. A suite of diagnostics to validate and optimize the prototype ITER neutral beam injector

    NASA Astrophysics Data System (ADS)

    Pasqualotto, R.; Agostini, M.; Barbisan, M.; Brombin, M.; Cavazzana, R.; Croci, G.; Dalla Palma, M.; Delogu, R. S.; De Muri, M.; Muraro, A.; Peruzzo, S.; Pimazzoni, A.; Pomaro, N.; Rebai, M.; Rizzolo, A.; Sartori, E.; Serianni, G.; Spagnolo, S.; Spolaore, M.; Tardocchi, M.; Zaniol, B.; Zaupa, M.

    2017-10-01

    The ITER project requires additional heating provided by two neutral beam injectors using 40 A negative deuterium ion beams accelerated to 1 MV. As the beam requirements have never been experimentally met, a test facility is under construction at Consorzio RFX, which hosts two experiments: SPIDER, the full-size 100 kV ion source prototype, and MITICA, the 1 MeV full-size ITER injector prototype. Since diagnostics in the ITER injectors will be mainly limited to thermocouples, due to neutron and gamma radiation and to limited access, it is crucial to thoroughly investigate and characterize in more accessible experiments the key parameters of the source plasma and the beam, using several complementary diagnostics assisted by modelling. In SPIDER and MITICA the ion source parameters will be measured by optical emission spectroscopy, electrostatic probes, cavity ring-down spectroscopy for H^- density and laser absorption spectroscopy for cesium density. Measurements over multiple lines of sight will provide the spatial distribution of the parameters over the source extension. The beam profile uniformity and its divergence are studied with beam emission spectroscopy, complemented by visible tomography and neutron imaging, which are novel techniques, while an instrumented calorimeter based on custom unidirectional carbon fiber composite tiles observed by infrared cameras will measure the beam footprint on short pulses with the highest spatial resolution. All heated components will be monitored with thermocouples: as these will likely be the only measurements available in the ITER injectors, their capabilities will be investigated by comparison with other techniques. The SPIDER and MITICA diagnostics are described in the present paper with a focus on their rationale, key solutions and most original and effective implementations.

  5. Indian Test Facility (INTF) and its updates

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, M.; Chakraborty, A.; Rotti, C.; Joshi, J.; Patel, H.; Yadav, A.; Shah, S.; Tyagi, H.; Parmar, D.; Sudhir, Dass; Gahlaut, A.; Bansal, G.; Soni, J.; Pandya, K.; Pandey, R.; Yadav, R.; Nagaraju, M. V.; Mahesh, V.; Pillai, S.; Sharma, D.; Singh, D.; Bhuyan, M.; Mistry, H.; Parmar, K.; Patel, M.; Patel, K.; Prajapati, B.; Shishangiya, H.; Vishnudev, M.; Bhagora, J.

    2017-04-01

    To characterize the ITER Diagnostic Neutral Beam (DNB) system to its full specification and to support IPR's negative-ion-based neutral beam injector (NBI) development programme, an R&D facility named INTF is in its commissioning phase. Implementation of a successful DNB at ITER requires that several challenges be overcome. These issues are related to negative ion production, its neutralization and the transport of the corresponding neutral beam over a path length of ∼20.67 m to reach the ITER plasma. The DNB is a procurement package for India, as an in-kind contribution to ITER. Since ITER is considered a nuclear facility, only the minimum diagnostic systems linked with safe operation of the machine are planned to be incorporated in it, making it difficult to characterize the DNB after onsite commissioning. The delivery of the DNB to ITER will therefore benefit if the DNB is operated and characterized prior to onsite commissioning. INTF is envisaged to become operational with the large-size ion source activities on a similar timeline to the SPIDER (RFX, Padova) facility. This paper describes some of the development updates of the facility.

  6. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as high-quality and accurate as conventional two-full-scan DECT.
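
    For flavor only, the exponential similarity weighting and weighted-average pixel estimate described above can be sketched as follows; the neighborhood radius and decay parameter sigma are hypothetical choices, not values from the paper.

      import numpy as np

      def spir_estimate(full_img, second_img, i, j, sigma=30.0, radius=5):
          # Toy SPIR-style estimate of pixel (i, j) of the second (sparse-view)
          # image: the average of its neighbors weighted by similarities computed
          # on the full-scan image. sigma and radius are illustrative; the pixel's
          # own term could be excluded but is kept here for brevity.
          h, w = full_img.shape
          i0, i1 = max(0, i - radius), min(h, i + radius + 1)
          j0, j1 = max(0, j - radius), min(w, j + radius + 1)
          diff = full_img[i0:i1, j0:j1] - full_img[i, j]
          wts = np.exp(-diff ** 2 / (2 * sigma ** 2))     # exponential similarity
          return (wts * second_img[i0:i1, j0:j1]).sum() / wts.sum()

    The regularizer of the abstract then penalizes the squared difference between each second-scan pixel and such an estimate, under the data fidelity constraint.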

  7. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option [1]. In addition, a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF-driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc-driven ion source. The RF-driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half-ITER-size ion source are ongoing at IPP, and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF) in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given, and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shine-through issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  8. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option. In addition, a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF-driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc-driven ion source. The RF-driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half-ITER-size ion source are ongoing at IPP, and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF) in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given, and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shine-through issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  9. The ITER Neutral Beam Test Facility towards SPIDER operation

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Gambetta, G.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Piovan, R.; Recchia, M.; Rizzolo, A.; Sartori, E.; Siragusa, M.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Fröschle, M.; Heinemann, B.; Kraus, W.; Nocentini, R.; Riedl, R.; Schiesko, L.; Wimmer, C.; Wünderlich, D.; Cavenago, M.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Hemsworth, R.

    2017-08-01

    SPIDER is one of two projects of the ITER Neutral Beam Test Facility under construction in Padova, Italy, at the Consorzio RFX premises. It will have a 100 keV beam source with a full-size prototype of the radio-frequency ion source for the ITER neutral beam injector (NBI) and, similar to the ITER diagnostic neutral beam, it is designed to operate with a pulse length of up to 3600 s, featuring an ITER-like magnetic filter field configuration (for high extraction of negative ions) and caesium oven layout (for high production of negative ions) as well as a wide set of diagnostics. These features will allow a reproduction of the ion source operation in ITER, which cannot be done in any other existing test facility. SPIDER realization is well advanced and first operation is expected at the beginning of 2018, with the mission of achieving the ITER heating and diagnostic NBI ion source requirements and of improving its performance in terms of reliability and availability. This paper mainly focuses on the preparation of the first SPIDER operations—integration and testing of SPIDER components, completion and implementation of diagnostics and control, and formulation of the operation and research plan, based on a staged strategy.

  10. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full-size ion source with low-voltage extraction, called SPIDER, and a full-size neutral beam injector at full beam power, called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and, in a later stage, D- ions) from an ITER-size ion source. The main requirements of this experiment are an H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electric fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.

  11. Radiation dose reduction in abdominal computed tomography during the late hepatic arterial phase using a model-based iterative reconstruction algorithm: how low can we go?

    PubMed

    Husarik, Daniela B; Marin, Daniele; Samei, Ehsan; Richard, Samuel; Chen, Baiyu; Jaffe, Tracy A; Bashir, Mustafa R; Nelson, Rendon C

    2012-08-01

    The aim of this study was to compare the image quality of abdominal computed tomography scans in an anthropomorphic phantom acquired at different radiation dose levels where each raw data set is reconstructed with both a standard convolution filtered back projection (FBP) and a full model-based iterative reconstruction (MBIR) algorithm. An anthropomorphic phantom in 3 sizes was used with a custom-built liver insert simulating late hepatic arterial enhancement and containing hypervascular liver lesions of various sizes. Imaging was performed on a 64-section multidetector-row computed tomography scanner (Discovery CT750 HD; GE Healthcare, Waukesha, WI) at 3 different tube voltages for each patient size and 5 incrementally decreasing tube current-time products for each tube voltage. Quantitative analysis consisted of contrast-to-noise ratio calculations and image noise assessment. Qualitative image analysis was performed by 3 independent radiologists rating subjective image quality and lesion conspicuity. Contrast-to-noise ratio was significantly higher and mean image noise was significantly lower on MBIR images than on FBP images in all patient sizes, at all tube voltage settings, and all radiation dose levels (P < 0.05). Overall image quality and lesion conspicuity were rated higher for MBIR images compared with FBP images at all radiation dose levels. Image quality and lesion conspicuity on 25% to 50% dose MBIR images were rated equal to full-dose FBP images. This phantom study suggests that depending on patient size, clinically acceptable image quality of the liver in the late hepatic arterial phase can be achieved with MBIR at approximately 50% lower radiation dose compared with FBP.
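
    For reference, the contrast-to-noise ratio used in the quantitative analysis can be computed from region-of-interest statistics along these lines (one common convention; the study's exact definition may differ).

      import numpy as np

      def contrast_to_noise(lesion_roi, background_roi):
          # CNR = |mean(lesion) - mean(background)| / std(background); the noise
          # term is taken from the background ROI in this common convention.
          return (abs(np.mean(lesion_roi) - np.mean(background_roi))
                  / np.std(background_roi))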

  12. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  13. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
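
    The reflective-symmetry idea rests on splitting an arbitrary static load into symmetric and antisymmetric parts, solving each on the half model with appropriate mirror-plane boundary conditions, and superposing. A linear-algebra toy version is sketched below; the stiffness matrix and reflection operator are hypothetical stand-ins for the IDEAS internals, and for brevity the two solves act on the full matrix rather than on half-size systems.

      import numpy as np

      # Toy reflective-symmetry solve: K is the stiffness matrix of a structure
      # whose dofs map onto themselves under a reflection permutation P (P @ P = I).
      rng = np.random.default_rng(2)
      n = 8
      P = np.eye(n)[::-1]                      # hypothetical reflection operator
      A = rng.normal(size=(n, n))
      K = A + A.T + n * np.eye(n)              # symmetric positive-definite matrix
      K = 0.5 * (K + P @ K @ P)                # enforce K = P K P for the toy model
      f = rng.normal(size=n)                   # arbitrary external static load

      # Split the load into symmetric and antisymmetric components ...
      f_sym, f_anti = 0.5 * (f + P @ f), 0.5 * (f - P @ f)
      # ... solve for each (a real code would solve two half-size systems with
      # symmetric/antisymmetric boundary conditions on the mirror plane) ...
      u = np.linalg.solve(K, f_sym) + np.linalg.solve(K, f_anti)
      # ... and by linearity the superposition matches the full-model solution.
      assert np.allclose(u, np.linalg.solve(K, f))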

  14. Definition of acceptance criteria for the ITER divertor plasma-facing components through systematic experimental analysis

    NASA Astrophysics Data System (ADS)

    Escourbiac, F.; Richou, M.; Guigon, R.; Constans, S.; Durocher, A.; Merola, M.; Schlosser, J.; Riccardi, B.; Grosman, A.

    2009-12-01

    Experience has shown that a critical part of the high-heat-flux (HHF) plasma-facing component (PFC) is the armour-to-heat-sink bond. An experimental study was performed in order to define acceptance criteria with regard to the thermal-hydraulic and fatigue performance of the International Thermonuclear Experimental Reactor (ITER) divertor PFCs. This study, which includes the manufacturing of samples with calibrated artificial defects relevant to the divertor design, is reported in this paper. In particular, it was concluded that defects detectable with non-destructive examination (NDE) techniques appeared to be acceptable during HHF experiments relevant to the heat fluxes expected in the ITER divertor. On the basis of these results, a set of acceptance criteria was proposed and applied to the European vertical target medium-size qualification prototype: 98% of the inspected carbon fibre composite (CFC) monoblocks and 100% of the tungsten (W) monoblock and flat tile elements (i.e. 80% of the full units) were declared acceptable.

  15. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  16. Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations

    NASA Technical Reports Server (NTRS)

    Schuler, A. L.; Wiley, D. R.

    1989-01-01

    Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.

  17. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image is reconstructed from reduced projections by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix used for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves the reconstruction accuracy of a 10-view scan over TVR, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on spatial resolution at 50- and 20-view scans, with the frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has a 7 times lower noise standard deviation with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
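
    A heavily simplified sketch of such an iteration is given below, with a toy linear operator standing in for the CT projector, a plain gradient step standing in for the constrained minimization, and a quadratic surrogate for the total-variation penalty on the difference image; every name and parameter here is illustrative, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(3)
      n, m = 256, 40                             # image pixels, sparse-view samples
      A = rng.normal(size=(m, n)) / np.sqrt(n)   # toy projection operator
      x_true = rng.normal(size=n)
      b = A @ x_true                             # sparse-view data of second scan

      # Bilateral-style similarity matrix W built from the full-scan image (here
      # mocked by x_true plus noise); rows average over structurally similar pixels.
      full_img = x_true + 0.05 * rng.normal(size=n)
      sigma = 0.5
      W = np.exp(-(full_img[:, None] - full_img[None, :]) ** 2 / (2 * sigma ** 2))
      W /= W.sum(axis=1, keepdims=True)

      x = np.zeros(n)
      lam, step = 0.5, 0.05
      for it in range(500):
          grad_fid = A.T @ (A @ x - b)           # data-fidelity gradient
          r = x - W @ x                          # difference from filtered image
          grad_reg = r - W.T @ r                 # gradient of 0.5 * ||(I - W) x||^2
          x -= step * (grad_fid + lam * grad_reg)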

  18. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

    We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model which in the end significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  19. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  20. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
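
    In spirit, the spectral PICCS objective balances sparsity of the bin image against sparsity of its difference from the full-spectrum prior; a toy 1D gradient-descent version with a smoothed TV penalty might look as follows (the operators, weights, and data are invented for illustration, and the paper's constrained scheme with adaptive step sizes is replaced by a fixed step).

      import numpy as np

      def grad_tv_smooth(u, eps=1e-3):
          # Gradient of a smoothed 1D total variation, sum_i sqrt((u[i+1]-u[i])^2 + eps).
          du = np.diff(u)
          g = du / np.sqrt(du ** 2 + eps)
          out = np.zeros_like(u)
          out[:-1] -= g
          out[1:] += g
          return out

      rng = np.random.default_rng(4)
      n, m = 128, 30
      A = rng.normal(size=(m, n)) / np.sqrt(n)          # toy bin-data forward operator
      prior = np.repeat([0.0, 1.0, 0.3, 0.8], n // 4)   # full-spectrum "FBP prior" image
      b = A @ prior + 0.01 * rng.normal(size=m)         # noisy narrow-bin measurements

      x, alpha, lam, step = prior.copy(), 0.7, 0.3, 0.02
      for it in range(500):
          grad = A.T @ (A @ x - b)                          # data-fidelity term
          grad += lam * alpha * grad_tv_smooth(x - prior)   # TV of (image - prior)
          grad += lam * (1 - alpha) * grad_tv_smooth(x)     # plain TV of the image
          x -= step * grad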

  1. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-09

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters thereby suggests using the extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
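
    The outer/inner structure described above, a Newton iteration whose linear systems are handled by a preconditioned Krylov solver, can be sketched generically with SciPy; the nonlinear operator, Jacobian, and preconditioner below are toy stand-ins, not the MPB equation or the Green's-function preconditioner of the paper.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      # Toy nonlinear system F(u) = A u + u^3 - b = 0 standing in for the
      # discretized MPB equation; M approximates A^{-1} as the preconditioner.
      n = 50
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
      b = np.sin(np.linspace(0.0, np.pi, n))

      F = lambda u: A @ u + u ** 3 - b
      J = lambda u: A + np.diag(3.0 * u ** 2)                  # Jacobian of F
      M = LinearOperator((n, n), matvec=lambda v: np.linalg.solve(A, v))

      u = np.zeros(n)
      for k in range(20):                       # outer Newton iteration
          r = F(u)
          if np.linalg.norm(r) < 1e-10:
              break
          du, info = gmres(J(u), -r, M=M)       # inner preconditioned Krylov solve
          u += du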

  2. EC power management and NTM control in ITER

    NASA Astrophysics Data System (ADS)

    Poli, Francesca; Fredrickson, E.; Henderson, M.; Bertelli, N.; Farina, D.; Figini, L.; Nowak, S.; Poli, E.; Sauter, O.

    2016-10-01

    The suppression of Neoclassical Tearing Modes (NTMs) is an essential requirement for the achievement of the demonstration baseline in ITER. The Electron Cyclotron upper launcher is specifically designed to provide highly localized heating and current drive for NTM stabilization. In order to assess the power management for shared applications, we have performed time-dependent simulations for ITER scenarios covering operation from half to full field. The free-boundary TRANSP simulations evolve the magnetic equilibrium and the pressure profiles in response to the heating and current drive sources and are interfaced with a generalized Rutherford equation (GRE) for the evolution of the size and frequency of the magnetic islands. Combined with a feedback control of the EC power and the steering angle, these simulations are used to model the plasma response to NTM control, accounting for the misalignment of the EC deposition with the resonant surfaces, uncertainties in the magnetic equilibrium reconstruction and in the magnetic island detection threshold. Simulations indicate that the threshold for detection of the island should not exceed 2-3 cm, that pre-emptive control is a preferable option, and that for safe operation the power needed for NTM control should be reserved, rather than shared with other applications. Work supported by ITER under IO/RFQ/13/9550/JTR and by DOE under DE-AC02-09CH11466.
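
    The control logic being exercised can be caricatured as below: evolve an island width with a toy growth law standing in for the generalized Rutherford equation, trigger full EC power when the detected width exceeds the threshold, and release it with hysteresis. Every number and the growth law itself are invented for illustration.

      # Toy NTM feedback loop; a fake island-growth law stands in for the GRE.
      dt, w, w_detect, w_sat = 0.01, 0.5, 2.0, 8.0   # step, island width (cm), ...
      p_ec, p_max = 0.0, 1.0                          # applied / max EC power (norm.)

      for step in range(5000):
          growth = 0.8 * w * (1.0 - w / w_sat)        # destabilizing (bootstrap-like) term
          suppression = 4.0 * p_ec * w                # stabilizing EC-driven term
          w = max(w + dt * (growth - suppression), 0.1)
          if w > w_detect:                            # island detected: apply full power
              p_ec = p_max
          elif w < 0.5 * w_detect:                    # hysteresis: release when small
              p_ec = 0.0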

  3. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, proprietary reconstruction software that allows CT scans acquired with reduced radiation dose to be simulated from the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed as either a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose reduction.

  4. iCI: Iterative CI toward full CI.

    PubMed

    Liu, Wenjian; Hoffmann, Mark R

    2016-03-08

    It is shown both theoretically and numerically that the minimal multireference configuration interaction (CI) approach [Liu, W.; Hoffmann, M. R. Theor. Chem. Acc. 2014, 133, 1481] converges quickly and monotonically from above to full CI by updating the primary, external, and secondary states that describe the respective static, dynamic, and again static components of correlation iteratively, even when starting with a rather poor description of a strongly correlated system. In short, the iterative CI (iCI) is a very effective means toward highly correlated wave functions and, ultimately, full CI.

  5. Ic(B,T,strain) Characterisation of a Nb3Sn Internal Tin Strand with Enhanced Specification for Use in Fusion Conductors

    NASA Astrophysics Data System (ADS)

    Pasztor, G.; Bruzzone, P.

    2004-06-01

    The dc performance of a recently produced internal tin route Nb3Sn strand with enhanced specification is studied extensively and compared with predecessor wires manufactured by the suppliers for the ITER Model Coils in 1996. The wire has been selected for use in a full size, developmental cable-in-conduit conductor sample, which is being tested in the SULTAN Test Facility. The critical current, Ic, and the index of the current/voltage characteristic, n, are measured over a broad range of field and temperature, using ITER standard sample holders, made of TiAlV grooved cylinders. The behavior of Ic versus applied tensile strain is also investigated at 4.2 K and 12 T, on straight specimens. Scaling law parameters are drawn from the fit of the experimental results. The implications of the test results to the design of the fusion conductors are discussed.

  6. Pair 2-electron reduced density matrix theory using localized orbitals

    NASA Astrophysics Data System (ADS)

    Head-Marsden, Kade; Mazziotti, David A.

    2017-08-01

    Full configuration interaction (FCI) restricted to a pairing space yields size-extensive correlation energies but its cost scales exponentially with molecular size. Restricting the variational two-electron reduced-density-matrix (2-RDM) method to represent the same pairing space yields an accurate lower bound to the pair FCI energy at a mean-field-like computational scaling of O(r^3), where r is the number of orbitals. In this paper, we show that localized molecular orbitals can be employed to generate an efficient, approximately size-extensive pair 2-RDM method. The use of localized orbitals eliminates the substantial cost of iteratively optimizing the orbitals defining the pairing space without compromising accuracy. In contrast to the localized orbitals, the use of canonical Hartree-Fock molecular orbitals is shown to be both inaccurate and non-size-extensive. The pair 2-RDM has the flexibility to describe the spectra of one-electron RDM occupation numbers from all quantum states that are invariant to time-reversal symmetry. Applications are made to hydrogen chains and their dissociation, n-acene from naphthalene through octacene, and cadmium telluride 2-, 3-, and 4-unit polymers. For the hydrogen chains, the pair 2-RDM method recovers the majority of the energy obtained from similar calculations that iteratively optimize the orbitals. The localized-orbital pair 2-RDM method with its mean-field-like computational scaling and its ability to describe multi-reference correlation has important applications to a range of strongly correlated phenomena in chemistry and physics.

  7. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are the accurate descriptions of the interaction potentials of the hetero-nuclear dimers.
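
    Structurally, the procedure is a fixed-point iteration: re-evaluate the combining rules within the potential model until Re and De stop changing. A generic sketch follows; the update function in the usage line is a hypothetical contraction, not the actual Tang-Toennies-compatible rules of the paper.

      def fixed_point(update, x0, tol=1e-10, max_iter=100):
          # Generic fixed-point iteration: repeat x <- update(x) until converged.
          # In the paper's scheme x = (Re, De) of the mixed dimer and update(x)
          # re-evaluates the combining rules within the potential model.
          x = x0
          for n in range(1, max_iter + 1):
              x_new = update(x)
              if all(abs(a - b) < tol for a, b in zip(x_new, x)):
                  return x_new, n           # converged value, iterations used
              x = x_new
          return x, max_iter

      # Usage with a hypothetical contraction standing in for the real rules; atoms
      # of similar "size" would converge in one round, dissimilar ones in several.
      val, iters = fixed_point(lambda p: (0.5 * (p[0] + 3.0), 0.5 * (p[1] + 1.2)),
                               (1.0, 0.2))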

  8. SULTAN measurement and qualification: ITER-US-LLNL-NMARTOVETSKY-092008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martovetsky, N N

    2006-09-21

    Measuring the characteristics of full-scale ITER CICC at SULTAN is the critical qualification test. If the volt-ampere characteristic (VAC) or volt-temperature characteristic (VTC) is distorted, the criterion of 10 uV/m may not be a valid criterion to judge the conductor performance. Only measurements with clearly absent or low signals from the current distribution should be considered as quantitatively representative, although in some obvious circumstances one can judge whether a conductor will meet or fail ITER requirements. SULTAN full-scale ITER CICC testing should be done with all measures taken to ensure uniform current redistribution. A full removal of the Cr plating in the joint area and complete solder filling of the joints (with provision of the central channel for helium flow) should be mandatory for DC qualification samples for ITER. Also, T and I should be increased slowly so that an equilibrium can be established for accurate measurement of Tcs, Ic and N. It is also desirable to go up and down in current and/or temperature (within the stable range) to make sure that the equilibrium is reached.

  9. Design and optimization of Artificial Neural Networks for the modelling of superconducting magnets operation in tokamak fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froio, A.; Bonifetto, R.; Carli, S.

    In superconducting tokamaks, the cryoplant provides the helium needed to cool different clients, among which by far the most important one is the superconducting magnet system. The evaluation of the transient heat load from the magnets to the cryoplant is fundamental for the design of the latter, and the assessment of suitable strategies to smooth the heat load pulses, induced by the intrinsically pulsed plasma scenarios characteristic of today's tokamaks, is crucial for both suitable sizing and stable operation of the cryoplant. For that evaluation, accurate but expensive system-level models, as implemented in e.g. the validated state-of-the-art 4C code, were developed in the past, including both the magnets and the respective external cryogenic cooling circuits. Here we show how these models can be successfully substituted with cheaper ones, where the magnets are described by suitably trained Artificial Neural Networks (ANNs) for the evaluation of the heat load to the cryoplant. First, two simplified thermal-hydraulic models for an ITER Toroidal Field (TF) magnet and for the ITER Central Solenoid (CS) are developed, based on ANNs, and a detailed analysis of the chosen networks' topology and parameters is presented and discussed. The ANNs are then inserted into the 4C model of the ITER TF and CS cooling circuits, which also includes active controls to achieve a smoothing of the variation of the heat load to the cryoplant. The training of the ANNs is achieved using the results of full 4C simulations (including detailed models of the magnets) for conventional sigmoid-like waveforms of the drivers, and the predictive capabilities of the ANN-based models in the case of actual ITER operating scenarios are demonstrated by comparison with the results of full 4C runs, both with and without active smoothing, in terms of both accuracy and computational time. Exploiting the low computational effort requested by the ANN-based models, a demonstrative optimization study has finally been carried out, with the aim of choosing among different smoothing strategies for standard ITER plasma operation.
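
    As a flavor of the surrogate idea, a small feed-forward network can be regressed on input/output pairs harvested from expensive system-level runs; the sketch below uses scikit-learn with invented driver and heat-load data, whereas the actual network topology and training sets come from validated 4C simulations.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Invented training data: scenario-driver samples -> heat load to cryoplant.
      rng = np.random.default_rng(6)
      X = rng.uniform(0.0, 1.0, size=(500, 4))    # hypothetical driver features
      y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.5 * X[:, 2] * X[:, 3]

      # Small multilayer perceptron standing in for the thermal-hydraulic model.
      ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
      ann.fit(X[:400], y[:400])
      print("surrogate R^2 on held-out samples:", ann.score(X[400:], y[400:]))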

  10. Application of linear multifrequency-grey acceleration to preconditioned Krylov iterations for thermal radiation transport

    DOE PAGES

    Till, Andrew T.; Warsa, James S.; Morel, Jim E.

    2018-06-15

    The thermal radiative transfer (TRT) equations comprise a radiation equation coupled to the material internal energy equation. Linearization of these equations produces effective, thermally-redistributed scattering through absorption-reemission. In this paper, we investigate the effectiveness and efficiency of Linear-Multi-Frequency-Grey (LMFG) acceleration that has been reformulated for use as a preconditioner to Krylov iterative solution methods. We introduce two general frameworks, the scalar flux formulation (SFF) and the absorption rate formulation (ARF), and investigate their iterative properties in the absence and presence of true scattering. SFF has a group-dependent state size but may be formulated without inner iterations in the presence of scattering, while ARF has a group-independent state size but requires inner iterations when scattering is present. We compare and evaluate the computational cost and efficiency of LMFG applied to these two formulations using a direct solver for the preconditioners. Finally, this work is novel because the use of LMFG for the radiation transport equation, in conjunction with Krylov methods, involves special considerations not required for radiation diffusion.

  11. ITER activities and fusion technology

    NASA Astrophysics Data System (ADS)

    Seki, M.

    2007-10-01

    At the 21st IAEA Fusion Energy Conference, 68 and 67 papers were presented in the categories of ITER activities and fusion technology, respectively. ITER performance prediction, results of technology R&D and the construction preparation provide good confidence in ITER realization. The superconducting tokamak EAST achieved its first plasma just before the conference. The construction of other new experimental machines has also shown steady progress. Future reactor studies stress the importance of downsizing and a steady-state approach. Reactor technology in the field of the blanket, including the ITER TBM programme, and materials for the demonstration power plant showed sound progress in both R&D and design activities.

  12. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.
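
    Stripped of the wave physics, each FWI iteration is: forward-model the sources, form the residual against measured data, build a gradient via the adjoint, and update the model. The toy below replaces wave propagation with a linear operator so the loop structure stands out; all names and sizes are hypothetical.

      import numpy as np

      rng = np.random.default_rng(7)
      n_model, n_data = 60, 120
      G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)  # toy modeling operator
      m_true = rng.normal(size=n_model)                          # "true" subsurface model
      d_obs = G @ m_true                                         # measured seismic data

      m = np.zeros(n_model)                                      # starting model
      for it in range(200):                  # FWI-style iterations
          d_syn = G @ m                      # forward modeling (wave solves in real FWI)
          residual = d_syn - d_obs           # misfit between modeled and measured data
          grad = G.T @ residual              # adjoint-state gradient (toy linear version)
          m -= 0.1 * grad                    # gradient-descent model update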

  13. Manufacturing and assembly of IWS support rib and lower bracket for ITER vacuum vessel

    NASA Astrophysics Data System (ADS)

    Laad, R.; Sarvaiya, Y.; Pathak, H. A.; Raval, J. R.; Choi, C. H.

    2017-04-01

    The ITER Vacuum Vessel (VV) is made of double walls connected by a rib structure and flexible housings. The space between these walls is filled with In-Wall Shielding (IWS) blocks to (1) shield neutrons streaming out of the plasma and (2) reduce the toroidal magnetic field ripple. These blocks will be connected to the VV through a supporting structure of Support Rib (SR) and Lower Bracket (LB) assemblies. The SR and LB are two independent components manufactured from SS 316L(N)-IG. In total, 1584 support ribs and 3168 lower brackets of different sizes and shapes will be manufactured for the IWS. Two lower brackets will be welded to one support rib to make an assembly; the welding between SR and LB is a full-penetration weld. In total, 1584 assemblies of different sizes and shapes will be manufactured. With sufficient experience gained from the manufacturing and testing of mock-ups, final manufacturing of the IWS support ribs and lower brackets has started at the site of the IWS manufacturer, M/s Avasarala Technologies Limited (ATL). This paper describes the optimization of water-jet cutting speed on the IWS material, the selection criteria for the K-type weld joint, unique features of the assembly fixture, manufacturing of mock-ups, and the welding processes with NDTs.

  14. Performance analysis of the toroidal field ITER production conductors

    NASA Astrophysics Data System (ADS)

    Breschi, M.; Macioce, D.; Devred, A.

    2017-05-01

    The production of the superconducting cables for the toroidal field (TF) magnets of the ITER machine has recently been completed at the manufacturing companies selected during the previous qualification phase. The quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers include performance tests of several conductor samples from selected unit lengths. The short full-size samples (4 m long) were subjected to DC and AC tests in the SULTAN facility at CRPP in Villigen, Switzerland. In a previous work the results of the tests of the conductor performance qualification samples were reported; this work reports the analyses of the test results for the production conductor samples. The results reported here concern the values of current sharing temperature, critical current, effective strain and n-value from the DC tests, and the energy dissipated per cycle from the AC loss tests. A detailed comparison is also presented between the performance of the conductors and that of their constituent strands.

  15. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears to be a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can encounter convergence difficulties for dry initial conditions. Variable switching, on the other hand, can overcome most of these numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. The two schemes exhibit different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches, the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier) for which comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure: it sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
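    The core of the technique is the per-node choice of unknown. A minimal sketch of such a switching rule follows; it is not the authors' PCOSN/TBFN scheme, and the van Genuchten parameters and switch threshold are illustrative only:

    ```python
    import numpy as np

    # Minimal sketch of primary variable switching for the Richards equation:
    # saturation S is the unknown in dry cells (where the h(S) curve is steep
    # and pressure-based Newton steps stall), pressure head h in (near-)
    # saturated cells. Van Genuchten parameters are illustrative only.
    alpha, n = 1.0, 2.0             # van Genuchten parameters [1/m], [-]
    m = 1.0 - 1.0 / n
    S_SWITCH = 0.99                 # effective-saturation threshold for switching

    def saturation(h):
        """Effective saturation S_e(h) from the van Genuchten model."""
        return np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)

    def choose_primary(h):
        """Pick the primary variable per node: True -> h, False -> S."""
        S = saturation(h)
        use_pressure = S >= S_SWITCH
        return use_pressure, np.where(use_pressure, h, S)

    h_nodes = np.array([-50.0, -5.0, -0.01, 0.2])   # nodal pressure heads [m]
    flags, values = choose_primary(h_nodes)
    for h, f, v in zip(h_nodes, flags, values):
        print(f"h = {h:7.2f} m  ->  primary = {'h' if f else 'S'} ({v:.4f})")
    ```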

  16. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
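    The essential update fits in a few lines. The toy sketch below uses a synthetic "Monte Carlo" tally whose noise shrinks like 1/sqrt(N) and assumes the relaxation factor alpha_i = N_i / sum of all histories so far; it illustrates the relaxed coupling only, not an actual transport calculation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a Monte Carlo power-distribution tally: the "true"
    # coupled solution plus statistical noise that shrinks like 1/sqrt(N).
    TRUE = np.array([0.8, 1.1, 1.3, 1.1, 0.7])

    def mc_solve(power_guess, histories):
        # A real code would run an MC transport + thermal-hydraulics pass here,
        # using power_guess as the feedback input.
        noise = rng.normal(0.0, 1.0 / np.sqrt(histories), size=TRUE.shape)
        return TRUE + noise

    # Stochastic-iteration update: histories grow each step and the relaxation
    # factor alpha_i = N_i / sum_j N_j decreases accordingly.
    power = np.ones_like(TRUE)
    total_histories, histories = 0, 1000
    for step in range(1, 11):
        total_histories += histories
        alpha = histories / total_histories
        tally = mc_solve(power, histories)
        power = power + alpha * (tally - power)
        print(f"step {step:2d}: N={histories:6d}  alpha={alpha:.3f}  "
              f"err={np.abs(power - TRUE).max():.4f}")
        histories += 1000   # near-linear growth of work per iteration
    ```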

  17. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. Ten consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. Five observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel requiring excellent definition.

  18. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap, as it is expected to achieve most of the important milestones on the path to fusion power; thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant, DEMO, which will for the first time deliver fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of DT operation in ITER, and the attainment of full performance, at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance; in this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  19. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary condition. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique kept the surface temperature bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
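    To make the linearized formulation concrete, here is a minimal sketch of an implicit 1-D conduction step in which the surface radiation term is linearized about the previous temperature; all material properties and the absorbed heating rate are illustrative choices, not shuttle TPS values:

    ```python
    import numpy as np

    # Implicit 1-D conduction in a slab with a *linearized* radiation boundary:
    # eps*sig*T^4 is approximated by eps*sig*(To^4 + 4*To^3*(T - To)), with To
    # the previous-step surface temperature. Back face is insulated.
    N, L = 21, 0.05                  # nodes, slab thickness [m]
    dx = L / (N - 1)
    k, rho, cp = 1.0, 2000.0, 900.0  # W/m-K, kg/m^3, J/kg-K (illustrative)
    sig, eps = 5.670e-8, 0.85        # Stefan-Boltzmann constant, emissivity
    q_in = 5.0e4                     # absorbed surface heating [W/m^2]
    dt, steps = 0.5, 10000

    T = np.full(N, 300.0)            # initial temperature [K]
    fo = k * dt / (rho * cp * dx**2)          # grid Fourier number
    beta = 2.0 * dt / (rho * cp * dx)         # half-cell factor, boundary nodes

    for _ in range(steps):
        A = np.zeros((N, N))
        b = T.copy()
        To = T[0]                    # linearization point for radiation
        A[0, 0] = 1.0 + 2.0 * fo + 4.0 * beta * eps * sig * To**3
        A[0, 1] = -2.0 * fo
        b[0] += beta * (q_in + 3.0 * eps * sig * To**4)
        for i in range(1, N - 1):    # interior nodes, fully implicit
            A[i, i - 1] = A[i, i + 1] = -fo
            A[i, i] = 1.0 + 2.0 * fo
        A[-1, -1] = 1.0 + 2.0 * fo   # insulated back face (half cell)
        A[-1, -2] = -2.0 * fo
        T = np.linalg.solve(A, b)

    # Surface should approach radiative equilibrium (q_in/(eps*sig))**0.25.
    print(f"surface T after {steps * dt:.0f} s: {T[0]:.1f} K "
          f"(equilibrium ~ {(q_in / (eps * sig)) ** 0.25:.0f} K)")
    ```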

  20. Accuracy of iodine quantification in dual-layer spectral CT: Influence of iterative reconstruction, patient habitus and tube parameters.

    PubMed

    Sauter, Andreas P; Kopp, Felix K; Münzel, Daniela; Dangelmaier, Julia; Renz, Martin; Renger, Bernhard; Braren, Rickmer; Fingerle, Alexander A; Rummeny, Ernst J; Noël, Peter B

    2018-05-01

    Evaluation of the influence of iterative reconstruction, tube settings and patient habitus on the accuracy of iodine quantification with dual-layer spectral CT (DL-CT). A CT abdomen phantom with different extension rings and four iodine inserts (1, 2, 5 and 10 mg/ml) was scanned on a DL-CT system. The phantom was scanned with tube voltages of 120 and 140 kVp and CTDIvol values of 2.5, 5, 10 and 20 mGy. Reconstructions were performed for eight levels of iterative reconstruction (i0-i7). Diagnostic dose levels are classified depending on patient size and radiation dose. Measurements of iodine concentration showed accurate and reliable results. Taking all CTDIvol levels into account, the mean absolute percentage difference (MAPD) showed less accuracy for low CTDIvol levels (2.5 mGy: 34.72%) than for high CTDIvol levels (20 mGy: 5.89%). At diagnostic dose levels, accurate quantification of iodine was possible (MAPD 3.38%). The level of iterative reconstruction did not significantly influence iodine measurements. Iodine quantification was more accurate at a tube voltage of 140 kVp. Phantom size had a considerable effect only at low dose levels; at diagnostic dose levels the effect of phantom size decreased (MAPD <5% for all phantom sizes). With DL-CT, even low iodine concentrations can be accurately quantified. Accuracies are higher when diagnostic radiation doses are employed. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Improving spatial and spectral resolution of TCV Thomson scattering

    NASA Astrophysics Data System (ADS)

    Hawke, J.; Andrebe, Y.; Bertizzolo, R.; Blanchard, P.; Chavan, R.; Decker, J.; Duval, B.; Lavanchy, P.; Llobet, X.; Marlétaz, B.; Marmillod, P.; Pochon, G.; Toussaint, M.

    2017-12-01

    The recently completed MST2 upgrade to the Thomson scattering (TS) system on TCV (Tokamak à Configuration Variable) at the Swiss Plasma Center aims to provide enhanced spatial and spectral resolution while maintaining a high level of diagnostic flexibility for the study of TCV plasmas. MST2 (Medium Sized Tokamak) is a work program within the Eurofusion ITER physics department aimed at exploiting Europe's medium-sized tokamak programs for a better understanding of ITER physics. This upgrade to the TCV Thomson scattering system involved the installation of 40 new compact 5-channel spectrometers and modifications to the diagnostic's fiber optic design. The complete redesign of the fiber optic backplane incorporates fewer, larger-diameter fibers, allowing for higher resolution in both the core and the edge of TCV plasmas along the laser line, with a slight decrease in the signal-to-noise ratio of the Thomson measurements. The 40 new spectrometers added to the system are designed to cover the full range of temperatures expected in TCV, able to measure electron temperatures (Te) with high precision between 6 eV and 20 keV. The design of these compact spectrometers stems originally from the design utilized in the MAST (Mega Amp Spherical Tokamak) TS system located in Oxfordshire, United Kingdom. This design was implemented on TCV with an overall layout of optical fibers and spectrometers chosen to achieve an overall increase in spatial resolution, specifically a resolution of approximately 1% of the minor radius within the plasma pedestal region. These spectrometers also enhance the diagnostic's spectral resolution, especially within the plasma edge, due to the low-Te measurement capabilities. These additional spectrometers allow for much greater diagnostic flexibility, providing quality full Thomson profiles in 75% of TCV plasma configurations.

  2. Thermal conductivity of graphene mediated by strain and size

    DOE PAGES

    Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang; ...

    2016-06-09

    Based on first-principles calculations and a full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelength smaller than 10 nm contribute 80% of the intrinsic room-temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. k of graphene can be tuned over a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering k of graphene by varying strain and size.

  3. Characterization of a CT unit for the detection of low contrast structures

    NASA Astrophysics Data System (ADS)

    Viry, Anais; Racine, Damien; Ba, Alexandre; Becce, Fabio; Bochud, François O.; Verdun, Francis R.

    2017-03-01

    Major technological advances in CT enable the acquisition of high-quality images while minimizing patient exposure. The goal of this study was to objectively compare two generations of iterative reconstruction (IR) algorithms for the detection of low-contrast structures. An abdominal phantom (QRM, Germany), containing 8, 6 and 5 mm diameter spheres (with a nominal contrast of 20 HU), was scanned using our standard clinical noise index settings on a GE "Discovery 750 HD" CT system. Two additional rings (2.5 and 5 cm) were also added to the phantom. Images were reconstructed using FBP, ASIR-50%, and VEO (a full statistical model-based iterative reconstruction, MBIR). The reconstructed slice thickness was 2.5 mm, except 0.625 mm for VEO reconstructions. The noise power spectrum (NPS) was calculated to highlight the potential noise reduction of each IR algorithm. To assess low-contrast detectability (LCD), a channelized Hotelling observer (CHO) with 10 DDoG channels was used, with the area under the curve (AUC) as the figure of merit. Sphere contrast was also measured. ASIR-50% allowed a noise reduction by a factor of two compared to FBP, without an improvement in LCD. VEO allowed an additional noise reduction at a thinner slice thickness compared to ASIR-50%, together with a major improvement in LCD, especially for the large-sized phantom and small lesions. Contrast decreased by up to 10% with increasing phantom size for FBP and ASIR-50%, and remained constant with VEO. VEO is particularly interesting for LCD when dealing with large patients and small lesion sizes, and when the detection task is difficult.
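    As an illustration of the detectability metric, the following minimal CHO sketch uses difference-of-Gaussians channels on synthetic images; the channel widths, signal size and contrast are arbitrary assumptions, and the white noise is a simplification of the correlated noise found in real CT images:

    ```python
    import numpy as np

    # Schematic channelized Hotelling observer (CHO): project images onto a
    # small set of DoG channels, build the Hotelling template from the channel
    # statistics, and score detectability as the AUC of the template outputs.
    rng = np.random.default_rng(4)

    n, n_img = 128, 400
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)

    def dog(s):                      # difference-of-Gaussians channel profile
        return np.exp(-r**2 / (2 * (1.4 * s)**2)) - np.exp(-r**2 / (2 * s**2))

    channels = np.stack([dog(2.0 * 1.4**j).ravel() for j in range(10)], axis=1)
    signal = 0.3 * (r <= 4)          # low-contrast disk, 4-pixel radius

    def channel_outputs(with_signal):
        img = rng.normal(0.0, 1.0, (n, n)) + (signal if with_signal else 0.0)
        return img.ravel() @ channels

    v_s = np.array([channel_outputs(True) for _ in range(n_img)])
    v_n = np.array([channel_outputs(False) for _ in range(n_img)])

    S = 0.5 * (np.cov(v_s.T) + np.cov(v_n.T))          # intra-class scatter
    w = np.linalg.solve(S, v_s.mean(0) - v_n.mean(0))  # Hotelling template
    t_s, t_n = v_s @ w, v_n @ w
    auc = (t_s[:, None] > t_n[None, :]).mean()         # Wilcoxon AUC estimate
    print(f"CHO AUC for the 4-pixel disk: {auc:.3f}")
    ```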

  4. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the proposed technique can therefore potentially be applied in many areas of scientific research.
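    The flavor of the reconstruction can be conveyed compactly. The toy sketch below applies the standard PIE object update with a synthetic binary aperture that is enlarged step by step; it is a simplification of the published algorithm, with all sizes chosen arbitrarily:

    ```python
    import numpy as np

    # Toy vaPIE-style loop: for each aperture size, enforce the measured
    # diffraction amplitude in the Fourier plane and apply the standard PIE
    # object update O += conj(P)/max|P|^2 * (psi' - psi), with psi = P*O.
    rng = np.random.default_rng(1)

    n = 128
    truth = np.exp(1j * rng.uniform(0.0, 0.5, (n, n)))   # unknown phase object
    x = np.arange(n) - n // 2
    r = np.hypot(*np.meshgrid(x, x))
    radii = (20, 28, 36, 44)                             # aperture enlarged stepwise
    measured = {R: np.abs(np.fft.fft2((r <= R) * truth)) for R in radii}

    obj = np.ones((n, n), complex)                       # flat initial guess
    for sweep in range(30):
        for R in radii:
            P = (r <= R).astype(complex)                 # binary illumination
            psi = P * obj
            Psi = np.fft.fft2(psi)
            Psi = measured[R] * np.exp(1j * np.angle(Psi))   # modulus constraint
            obj += np.conj(P) / (np.abs(P).max() ** 2) * (np.fft.ifft2(Psi) - psi)

    err = np.abs(obj - truth)[r <= max(radii)].mean()
    print(f"mean reconstruction error inside the largest aperture: {err:.3f}")
    ```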

  5. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
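    A minimal dense-matrix sketch of the alternation, with weighted-Jacobi sweeps and an Anderson least-squares extrapolation every p-th step; the parameters omega, beta, m and p below are illustrative choices rather than the paper's tuned values:

    ```python
    import numpy as np

    # Alternating Anderson-Jacobi (AAJ) sketch: weighted Jacobi updates, with
    # an Anderson extrapolation over the last m residuals every p-th iteration.
    rng = np.random.default_rng(2)

    n = 200
    A = np.diag(np.full(n, 4.0)) + rng.normal(0, 0.3, (n, n)) / np.sqrt(n)
    b = rng.normal(size=n)
    Dinv = 1.0 / np.diag(A)                  # Jacobi preconditioner D^{-1}

    omega, beta, m, p = 0.8, 0.8, 5, 6
    x = np.zeros(n)
    X, F = [], []                            # histories of iterates and residuals
    for k in range(1, 301):
        f = Dinv * (b - A @ x)               # preconditioned residual
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-m:], F[-m:]
        if k % p == 0 and len(F) > 1:        # Anderson step: min ||f - dF @ gamma||
            dX = np.diff(np.array(X), axis=0).T
            dF = np.diff(np.array(F), axis=0).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + beta * f - (dX + beta * dF) @ gamma
        else:                                # plain weighted Jacobi sweep
            x = x + omega * f
        if np.linalg.norm(b - A @ x) < 1e-10:
            break
    print(f"residual after {k} iterations: {np.linalg.norm(b - A @ x):.2e}")
    ```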

  6. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs to be tested in ITER as part of the TBM program. The Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material, in the form of packed pebble beds, and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the size of the breeder unit module, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on the ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  8. Numerical solutions of 2-D multi-stage rotor/stator unsteady flow interactions

    NASA Astrophysics Data System (ADS)

    Yang, R.-J.; Lin, S.-J.

    1991-01-01

    The Rai method for single-stage rotor/stator flow interaction is extended to handle multistage configurations. In this study, a two-dimensional Navier-Stokes multi-zone approach was used to investigate unsteady flow interactions within two multistage axial turbines. The governing equations are solved by an iterative, factored, implicit, finite-difference, upwind algorithm. Numerical accuracy is checked by investigating the effect of time step size, the effect of subiteration in the Newton-Raphson technique, and the effect of a full viscous versus a thin-layer approximation. Computed results compared well with experimental data. Unsteady flow interactions, wake cutting, and the associated evolution of vortical entities are discussed.

  9. Perturbation-iteration theory for analyzing microwave striplines

    NASA Technical Reports Server (NTRS)

    Kretch, B. E.

    1985-01-01

    A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution within a few iterations at each frequency and is equivalent to a full-wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.

  10. A methodology for image quality evaluation of advanced CT systems.

    PubMed

    Wilson, Joshua M; Christianson, Olav I; Richard, Samuel; Samei, Ehsan

    2013-03-01

    This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture, accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data and calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size as expected based on the manufacturer specification: from the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency, with better performance for higher-contrast objects. At low noise levels, TTFs of iterative reconstruction were better than those of FBP, but at higher noise that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ~30% decrease in magnitude and a 0.1 mm⁻¹ shift in the peak frequency. Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size, capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.

  11. Small-Scale Smart Grid Construction and Analysis

    NASA Astrophysics Data System (ADS)

    Surface, Nicholas James

    The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations, without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. The results of construction show data acquisition to be three times more expensive than the grid itself, mainly due to the inability to downsize 70% of data acquisition costs to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified sine wave power, significant enough to recommend investment in pure sine wave hardware for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage can also be researched on future iterations of the SSSG.

  12. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  13. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large number of calculations required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using an object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using data acquired with PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, with a total iteration time of 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved by the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstruction to shorten reconstruction times.

  14. ITER Cryoplant Infrastructures

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.

    2017-02-01

    The ITER Tokamak requires on average 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating conditions of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including, in particular, three identical liquid helium plants and two identical liquid nitrogen plants. Beyond the equipment directly forming part of the Cryoplant, substantial infrastructures are required; these account for a large part of the Cryoplant's layout, budget and engineering effort. It is the ITER Organization's responsibility to ensure that all infrastructures are adequately sized and designed to interface with the Cryoplant. This proceeding presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant buildings and utilities: electricity, cooling water, heating, ventilation and air conditioning (HVAC).

  15. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge-law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988, during the MAC III experiment.

  16. Erosion simulation of first wall beryllium armour under ITER transient heat loads

    NASA Astrophysics Data System (ADS)

    Bazylev, B.; Janeschitz, G.; Landman, I.; Pestchanyi, S.; Loarte, A.

    2009-04-01

    Beryllium is foreseen as the plasma-facing armour for the first wall in ITER, in the form of Be-clad blanket modules in a macrobrush design with a brush size of about 8-10 cm. In ITER, significant heat loads are expected at the main chamber wall during transient events (TE), which may lead to significant damage of the Be armour. The main mechanisms of metallic target damage remain surface melting and melt motion erosion, which determine the lifetime of the plasma-facing components. Melting thresholds and the melt layer depth of the Be armour under transient loads are estimated for different bulk Be temperatures and different shapes of the transient loads. The melt motion damage of Be macrobrush armour caused by the tangential friction force and the Lorentz force is analyzed for bulk Be and different sizes of Be brushes. The damage of the first wall under radiative loads arising during mitigated disruptions is also numerically simulated.

  18. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an importance sampling (IS) approach, the skywis plot, for estimating the demographic history of a sample of DNA sequences. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  19. A multi-reader in vitro study using porcine kidneys to determine the impact of integrated circuit detectors and iterative reconstruction on the detection accuracy, size measurement, and radiation dose for small (<4 mm) renal stones.

    PubMed

    Wells, Michael L; Froemming, Adam T; Kawashima, Akira; Vrtiska, Terri J; Kim, Bohyun; Hartman, Robert P; Holmes, David R; Carter, Rickey E; Bartley, Adam C; Leng, Shuai; McCollough, Cynthia H; Fletcher, Joel G

    2017-08-01

    Background Detection of small renal calculi has benefitted from recent advances in computed tomography (CT) scanner design. Information regarding observer performance when using state-of-the-art CT scanners for this application is needed. Purpose To assess observer performance and the impact of radiation dose for detection and size measurement of <4 mm renal stones using CT with integrated circuit detectors and iterative reconstruction. Material and Methods Twenty-nine <4 mm calcium oxalate stones were randomly placed in 20 porcine kidneys in an anthropomorphic phantom. Four radiologists used a workstation to record each calculus detection and size. The JAFROC figure of merit (FOM), sensitivity, false positive detections, and calculus size were calculated. Results Mean calculus size was 2.2 ± 0.7 mm. The CTDIvol values corresponding to the automatic exposure control settings of 160, 80, 40, 25, and 10 Quality Reference mAs (QRM) were 15.2, 7.9, 4.2, 2.7, and 1.3 mGy, respectively. The JAFROC FOM was ≥0.97 at ≥80 QRM, ≥0.89 at ≥25 QRM, and was inferior to routine dose (160 QRM) at 10 QRM (0.72, P < 0.05). Per-calculus sensitivity remained ≥85% for every reader at ≥25 QRM. Mean total false positive detections per reader were ≤3 at ≥80 QRM, but increased substantially for two readers (≥12) at ≤40 QRM. Measured calculus size significantly decreased at ≤25 QRM (P ≤ 0.01). Conclusion Using low-dose renal CT with iterative reconstruction and ≥25 QRM results in high sensitivity, but false positive detections increase for some readers at very low dose levels (≤40 QRM). At very low doses with iterative reconstruction, measured calculus size will artifactually decrease.

  20. Applicability of the iterative technique for cardiac resynchronization therapy optimization: full-disclosure, 50-sequential-patient dataset of transmitral Doppler traces, with implications for future research design and guidelines.

    PubMed

    Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P

    2014-04-01

    A full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement on the ideal AVD in Groups B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scatter of the AVDs selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). With blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.

  1. Validation of the thermal transport model used for ITER startup scenario predictions with DIII-D experimental data

    DOE PAGES

    Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...

    2010-12-08

    We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D Tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to the DIII-D discharge evolution, and comparisons with data from our similarity experiments.

  2. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
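    As a flavor of the closed-form ideas, the toy sketch below builds a ZF-style transmit beamformer: the MRT direction for the desired channel, projected onto the null space of the self-interference channel. The antenna counts and i.i.d. Gaussian channels are illustrative assumptions, not the paper's mmWave channel model:

    ```python
    import numpy as np

    # ZF-flavoured closed-form Tx beamformer for full-duplex: take the MRT
    # direction for the desired channel h and project it onto the null space
    # of the self-interference (SI) channel Hsi, so no SI reaches the receiver.
    rng = np.random.default_rng(7)

    Nt, Nr = 8, 4                                   # Tx / Rx antennas (illustrative)
    h = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)                # desired channel
    Hsi = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))  # SI channel

    # Orthogonal projector onto null(Hsi): P = I - Hsi^H (Hsi Hsi^H)^{-1} Hsi
    P = np.eye(Nt) - Hsi.conj().T @ np.linalg.inv(Hsi @ Hsi.conj().T) @ Hsi
    w = P @ h.conj()
    w /= np.linalg.norm(w)                          # unit-power beamformer

    print(f"residual SI power : {np.linalg.norm(Hsi @ w)**2:.2e}")   # ~0
    print(f"signal power gain : {abs(h @ w)**2:.2f}")
    ```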

  3. Knobology in use: an experimental evaluation of ergonomics recommendations.

    PubMed

    Overgård, Kjell Ivar; Fostervold, Knut Inge; Bjelland, Hans Vanhauwaert; Hoff, Thomas

    2007-05-01

    The scientific basis for ergonomics recommendations on controls has usually not been related to active, goal-directed use. The present experiment tests how different knob sizes and torques affect operator performance. The task employed, controlling a pointer by means of a control knob, is an experimentally defined goal-directed task relevant to machine systems in general. Duration of use, error associated with use (overshooting of the goal area) and movement reproduction were used as performance measures. Significant differences between knob sizes were found for movement reproduction. High torques led to less overshooting than low torques. The results for duration of use showed a tendency for the differences between knob sizes to diminish from the first iteration to the second. The present results indicate that the ergonomically recommended ranges of knob sizes might affect operator performance differently.

  4. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social life, and it has drawn increasing attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most previous ranking-based methods either rest on a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, with the ratings of high-reputation users having larger weights in dominating the corresponding user rating groups. The reputations of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method outperforms the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
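    A toy sketch of the iterative reputation-allocation idea, in which a group is simply the set of users giving an item the same integer rating, and the normalization is simplified relative to the paper:

    ```python
    import numpy as np

    # Toy iterative group-based ranking: users who give an item the same rating
    # form a group; a user's score on that item is the reputation-weighted size
    # of his/her group, and reputations are re-estimated until they stabilize.
    rng = np.random.default_rng(5)

    n_users, n_items, n_spam = 60, 40, 10
    truth = rng.integers(1, 6, n_items)                    # "true" item qualities
    R = np.tile(truth, (n_users, 1))
    honest = n_users - n_spam
    R[:honest] = np.clip(R[:honest] + rng.integers(-1, 2, (honest, n_items)), 1, 5)
    R[honest:] = rng.integers(1, 6, (n_spam, n_items))     # random-rating spammers

    rep = np.full(n_users, 1.0 / n_users)
    for _ in range(50):
        scores = np.zeros((n_users, n_items))
        for j in range(n_items):
            for val in range(1, 6):
                members = R[:, j] == val
                if members.any():                          # reputation-weighted group size
                    scores[members, j] = rep[members].sum()
        new_rep = scores.mean(axis=1)
        new_rep /= new_rep.sum()
        if np.abs(new_rep - rep).max() < 1e-10:
            break
        rep = new_rep

    print(f"mean reputation, honest users: {rep[:honest].mean():.4f}")
    print(f"mean reputation, spammers    : {rep[honest:].mean():.4f}")
    ```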

  5. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. The sensitivity information between model parameters and measurements is then calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are only required for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.

  6. Size scaling of negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛-scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½-scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE have allowed a size scaling study from the prototype source towards the ITER-relevant size, in which operational issues, physical aspects and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a distinct plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short- as well as long-pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that of the prototype source despite the larger size.

  7. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels are considered together with all capacity restrictions, the lot sizing problem becomes NP-hard, and many heuristics developed in the past have failed due to problem size, computational complexity and time. The authors, however, were successful in developing a PSO-based technique, namely an iterative improvement binary particle swarm technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time; an iterative improvement local search mechanism is then employed to improve the solution obtained by BPSO. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus performs best and shows excellent results.
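    The BPSO ingredient can be sketched compactly. The toy below applies a sigmoid-transfer binary PSO to a single-item, uncapacitated lot-sizing instance, a much smaller cousin of CMIMLLS; the cost parameters and PSO coefficients are illustrative choices:

    ```python
    import numpy as np

    # Toy binary PSO for single-item, uncapacitated lot sizing: a particle is a
    # 0/1 vector of order periods; cost = setups + holding of carried inventory.
    rng = np.random.default_rng(6)

    T = 12
    demand = rng.integers(20, 80, T)
    setup, hold = 100.0, 1.0                     # illustrative cost parameters

    def cost(y):
        y = y.copy(); y[0] = 1                   # demand before the first order is infeasible
        orders = np.flatnonzero(y)
        total = setup * len(orders)
        for i, t in enumerate(orders):
            nxt = orders[i + 1] if i + 1 < len(orders) else T
            lot = demand[t:nxt].sum()            # the order covers until the next one
            for s in range(t, nxt):
                lot -= demand[s]
                total += hold * lot              # end-of-period inventory cost
        return total

    n_part, iters = 30, 200
    pos = rng.integers(0, 2, (n_part, T)).astype(float)
    vel = rng.normal(0, 1, (n_part, T))
    pbest, pbest_c = pos.copy(), np.array([cost(p.astype(int)) for p in pos])
    g, g_c = pbest[pbest_c.argmin()].copy(), pbest_c.min()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, T))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = (rng.random((n_part, T)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)  # sigmoid transfer
        c = np.array([cost(p.astype(int)) for p in pos])
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        if c.min() < g_c:
            g, g_c = pos[c.argmin()].copy(), c.min()

    best = g.astype(int); best[0] = 1
    print("order periods:", np.flatnonzero(best))
    print("best cost    :", g_c)
    ```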

  8. ITER Status and Plans

    NASA Astrophysics Data System (ADS)

    Greenfield, Charles M.

    2017-10-01

    The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015, following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure the project has moved into high gear, with rapid progress evident on the construction site and the preparation of a staged schedule and a research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.

  9. Adaptive and iterative methods for simulations of nanopores with the PNP-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mitscha-Baude, Gregor; Buttinger-Kreuzhuber, Andreas; Tulzer, Gerhard; Heitzinger, Clemens

    2017-06-01

    We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. The source code is released online at http://github.com/mitschabaude/nanopores. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations. In one model, the molecule is of finite size and is explicitly built into the geometry; while in the other, the molecule is located at a single point and only modeled implicitly - after solution of the system - which is computationally favorable. We compare the resulting force profiles of the electric and velocity fields acting on the molecule, and conclude that the point-size model fails to capture important physical effects such as the dependence of charge selectivity of the sensor on the molecule radius.

  10. Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER

    NASA Astrophysics Data System (ADS)

    Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena

    2015-11-01

    Electromagnetic wave propagation and scattering in magnetized plasmas are the basis of important diagnostics for high-temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (finite-difference time-domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (general-purpose computing on graphics processing units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, the code is used to study the current design of the ITER Low Field Side Reflectometer (LFSR) for Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466 and DE-FG02-99-ER54527.
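    In one dimension, the FDTD-with-cold-plasma approach reduces to the Yee update plus an auxiliary current equation dJ/dt = eps0*wp^2*E. The sketch below, with an arbitrary grid, probing frequency and slab location (and no absorbing boundaries), illustrates the method only; it is not the GPU-assisted 3-D code of the abstract:

    ```python
    import numpy as np

    # 1-D FDTD sketch of the cold-plasma approach: Yee updates for (E, H) plus
    # an auxiliary current J inside the plasma. An overdense slab (wp > omega)
    # should reflect the wave, leaving only an evanescent tail inside.
    c0, eps0, mu0 = 2.998e8, 8.854e-12, 4e-7 * np.pi

    nz, dz = 1200, 0.5e-3                 # 0.6 m domain, 20 cells per vacuum wavelength
    dt = 0.5 * dz / c0                    # Courant-stable time step
    f0 = 30e9                             # probing frequency
    wp = 2 * np.pi * 45e9                 # plasma frequency > 2*pi*f0: overdense
    plasma = np.zeros(nz, bool); plasma[700:] = True

    E, H, J = np.zeros(nz), np.zeros(nz), np.zeros(nz)
    for n in range(4000):
        H[:-1] += dt / (mu0 * dz) * (E[1:] - E[:-1])
        E[1:] += dt / (eps0 * dz) * (H[1:] - H[:-1]) - dt / eps0 * J[1:]
        J[plasma] += dt * eps0 * wp**2 * E[plasma]   # dJ/dt = eps0 * wp^2 * E
        E[100] += np.sin(2 * np.pi * f0 * n * dt)    # soft source
    # Wall reflections are tolerated in this demo; the amplitude contrast
    # across the slab edge still shows the cutoff clearly.
    print(f"|E| just in front of the slab: {np.abs(E[650:695]).max():.2f}")
    print(f"|E| 30+ cells inside the slab: {np.abs(E[760:800]).max():.2e}")
    ```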

  11. Iterative Purification and Effect Size Use with Logistic Regression for Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    French, Brian F.; Maller, Susan J.

    2007-01-01

    Two unresolved implementation issues with logistic regression (LR) for differential item functioning (DIF) detection include ability purification and effect size use. Purification is suggested to control inaccuracies in DIF detection as a result of DIF items in the ability estimate. Additionally, effect size use may be beneficial in controlling…
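
    The purification loop the abstract refers to is easy to sketch in toy form. Below, synthetic item responses, the flagging threshold, and the use of the group-term coefficient as a crude effect-size cut are all illustrative assumptions, not the authors' exact criteria: flagged items are removed from the matching score and the screen repeats until the flagged set stabilizes.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Generate synthetic responses: one item (index 7) carries DIF.
    rng = np.random.default_rng(0)
    n, k = 1000, 10
    theta = rng.normal(size=n)                  # latent ability
    group = rng.integers(0, 2, size=n)          # reference / focal group
    b = rng.normal(size=k)                      # item difficulties
    dif = np.zeros(k); dif[7] = 1.0             # DIF shift for the focal group
    p = 1 / (1 + np.exp(-(theta[:, None] - b + dif * group[:, None])))
    resp = (rng.random((n, k)) < p).astype(int)

    flagged = set()
    for sweep in range(10):
        keep = [j for j in range(k) if j not in flagged]
        ability = resp[:, keep].sum(axis=1)     # purified matching score
        X = np.column_stack([ability, group])
        new_flags = set()
        for j in range(k):
            beta = LogisticRegression().fit(X, resp[:, j]).coef_[0]
            if abs(beta[1]) > 0.4:              # crude effect-size cut-off
                new_flags.add(j)
        if new_flags == flagged:                # purification has stabilized
            break
        flagged = new_flags
    print("items flagged for DIF:", sorted(flagged))
    ```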

  12. Controlled iterative cross-coupling: on the way to the automation of organic synthesis.

    PubMed

    Wang, Congyang; Glorius, Frank

    2009-01-01

    Repetition does not hurt! New strategies for the modulation of the reactivity of difunctional building blocks are discussed, allowing palladium-catalyzed controlled iterative cross-coupling and, thus, the efficient formation of complex molecules of defined size and structure (see scheme). As in peptide synthesis, this development will enable the automation of these reactions. M(PG) = protected metal, M(act) = activated metal.

  13. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements at designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on the development of nuclear instrumentation for application in the European ITER TBMs, experimental investigations of self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat, sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  14. Tritium proof-of-principle pellet injector: Phase 2

    NASA Astrophysics Data System (ADS)

    Fisher, P. W.; Gouge, M. J.

    1995-03-01

    As part of the International Thermonuclear Experimental Reactor (ITER) plasma fueling development program, Oak Ridge National Laboratory (ORNL) has fabricated a pellet injection system to test the mechanical and thermal properties of extruded tritium. This repeating, single-stage, pneumatic injector, called the Tritium Proof-of-Principle Phase 2 (TPOP-2) pellet injector, has a piston-driven mechanical extruder and is designed to extrude hydrogenic pellets sized for the ITER device. The TPOP-2 program has the following development goals: evaluate the feasibility of extruding tritium and DT mixtures for use in future pellet injection systems; determine the mechanical and thermal properties of tritium and DT extrusions; integrate, test and evaluate the extruder in a repeating, single-stage light gas gun sized for the ITER application (pellet diameter approximately 7-8 mm); evaluate options for recycling propellant and extruder exhaust gas; and evaluate the operability and reliability of ITER-prototypical fueling systems in an environment of significant tritium inventory requiring secondary and room containment systems. In initial tests with deuterium feed at ORNL, up to thirteen pellets have been extruded at rates up to 1 Hz and accelerated to speeds of order 1.0-1.1 km/s using hydrogen propellant gas at a supply pressure of 65 bar. The pellets are typically 7.4 mm in diameter and up to 11 mm in length and are the largest cryogenic pellets produced by the fusion program to date. These pellets represent about an 11% density perturbation to ITER. Hydrogenic pellets will be used in ITER to sustain the fusion power in the plasma core and may be crucial in reducing first wall tritium inventories by a process called isotopic fueling, where tritium-rich pellets fuel the burning plasma core and deuterium gas fuels the edge.

  15. Strategy Guideline: HVAC Equipment Sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, A.

    The heating, ventilation, and air conditioning (HVAC) system is arguably the most complex system installed in a house and is a substantial component of the total house energy use. A right-sized HVAC system will provide the desired occupant comfort and will run efficiently. This Strategy Guideline discusses the information needed to initially select the equipment for a properly designed HVAC system. Right-sizing of an HVAC system involves the selection of equipment and the design of the air distribution system to meet the accurate predicted heating and cooling loads of the house. Right-sizing the HVAC system begins with an accurate understanding of the heating and cooling loads on a space; however, a full HVAC design involves more than just the load estimate calculation: the load calculation is the first step of the iterative HVAC design procedure. This guide describes the equipment selection of a split system air conditioner and furnace for an example house in Chicago, IL as well as a heat pump system for an example house in Orlando, FL. The required heating and cooling load information for the two example houses was developed in the Department of Energy Building America Strategy Guideline: Accurate Heating and Cooling Load Calculations.

  16. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and that the number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N²), instead of the O(N³) of direct solvers.
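
    The "system iteration" is essentially a block Gauss-Seidel sweep over substructures, as the minimal dense-matrix sketch below illustrates. The random SPD test system and block count are illustrative, and the paper's implementation additionally regenerates the block matrices from spatial symmetry instead of storing them.

    ```python
    import numpy as np

    # Block Gauss-Seidel "system iteration" over substructures of A x = b.
    rng = np.random.default_rng(1)
    n, nb = 200, 4                              # unknowns, substructures
    G = rng.normal(size=(n, n))
    A = G @ G.T + n * np.eye(n)                 # well-conditioned SPD test matrix
    b = rng.normal(size=n)
    blocks = np.array_split(np.arange(n), nb)

    x = np.zeros(n)
    for it in range(200):
        for blk in blocks:                      # solve one substructure, others frozen
            others = np.setdiff1d(np.arange(n), blk)
            rhs = b[blk] - A[np.ix_(blk, others)] @ x[others]
            x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
        if np.linalg.norm(A @ x - b) < 1e-10 * np.linalg.norm(b):
            break
    print("sweeps:", it + 1, " relative residual:",
          np.linalg.norm(A @ x - b) / np.linalg.norm(b))
    ```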

  17. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process scales further down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received growing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable, so that designers can intervene manually. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, so design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and friendly to designers.
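
    At its core, checking triple-patterning decomposability is 3-colouring of a conflict graph, which the toy backtracking sketch below makes concrete. Both example graphs are illustrative, and production decomposers use far more scalable formulations than plain backtracking.

    ```python
    # Toy triple-patterning feasibility check: 3-colour the conflict graph by
    # backtracking and report features that block decomposition.
    def three_colour(adj):
        nodes = sorted(adj)
        colours = {}
        def place(i):
            if i == len(nodes):
                return True
            v = nodes[i]
            for c in range(3):
                if all(colours.get(u) != c for u in adj[v]):
                    colours[v] = c
                    if place(i + 1):
                        return True
                    del colours[v]
            return False
        return dict(colours) if place(0) else None

    ok = {0: {1}, 1: {0, 2}, 2: {1}}                      # chain: decomposable
    k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {3}}
    print("chain:", three_colour(ok))
    print("K4+pendant:", three_colour(k4) or "conflict -> manual fix needed")
    ```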

  18. Alternate Design of ITER Cryostat Skirt Support System

    NASA Astrophysics Data System (ADS)

    Pandey, Manish Kumar; Jha, Saroj Kumar; Gupta, Girish Kumar; Bhattacharya, Avik; Jogi, Gaurav; Bhardwaj, Anil Kumar

    2017-04-01

    The skirt support of the ITER cryostat is the support system that carries all loads of the cryostat cylinder and dome during normal and operational conditions. The present design of the skirt support has full penetration weld joints at the bottom (shell to horizontal plate joint). To fulfil the tolerance requirements and control welding distortions, we have proposed to change the full penetration weld into a fillet weld. A detailed calculation has been done to check the feasibility and structural impact of the proposed design, providing the size requirements of the fillet weld. To verify the structural integrity during the most severe load case, finite element analysis (FEA) has been done in line with ASME Section VIII Division 2 [1]. By FEA, the 'plastic collapse' and 'local failure' modes have been assessed. A 5° sector of the skirt clamp has been modelled in CATIA V5 R21 and used in the FEA; the fillet weld at the shell to horizontal plate joint has been modelled and a symmetry boundary condition applied at ±2.5°. An elastic-plastic analysis has been performed for the most severe loading case, i.e. Category IV loading. The alternate design of the cryostat skirt support system has been found safe by analysis against the plastic collapse and local failure modes, with a load proportionality factor of 2.3. The alternate design of the cryostat skirt support system has thus been completed and validated by FEA, and as per the alternate design, the fillet weld proposal has been implemented in manufacturing.

  19. Progress and achievements of R&D activities for the ITER vacuum vessel

    NASA Astrophysics Data System (ADS)

    Nakahira, M.; Takahashi, H.; Koizumi, K.; Onozuka, M.; Ioki, K.

    2001-04-01

    The Full Scale Sector Model Project, which was initiated in 1995 as one of the Seven Large Projects for ITER R&D, has been continued with the joint effort of the ITER Joint Central Team and the Japanese, Russian Federation and United States Home Teams. The fabrication of a full scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port centre, was successfully completed in September 1997 with a dimensional accuracy of +/-3 mm for the total height and total width. Both sectors were shipped to the test site at the Japan Atomic Energy Research Institute and the integration test of the sectors was begun in October 1997. The integration test involves the adjustment of field joints, automatic narrow gap tungsten inert gas welding of field joints with splice plates and inspection of the joints by ultrasonic testing, as required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test of the mechanical characteristics were completed in May 1998, and all the results obtained have satisfied the ITER design. In addition to these tests, integration with the midplane port extension fabricated by the Russian Home Team by using a fully remotized welding and cutting system developed by the US Home Team was completed in March 2000. The article describes the progress, achievements and latest status of the R&D activities for the ITER vacuum vessel.

  20. Power Radiated from ITER and CIT by Impurities

    DOE R&D Accomplishments Database

    Cummings, J.; Cohen, S. A.; Hulse, R.; Post, D. E.; Redi, M. H.; Perkins, J.

    1990-07-01

    The MIST code has been used to model impurity radiation from the edge and core plasmas in ITER and CIT. A broad range of parameters has been varied, including Z_eff, impurity species, impurity transport coefficients, and plasma temperature and density profiles, especially at the edge. For a set of these parameters representative of the baseline ITER ignition scenario, it is seen that impurity radiation, which is produced in roughly equal amounts by the edge and core regions, can make a major improvement in divertor operation without compromising core energy confinement. Scalings of impurity radiation with atomic number and machine size are also discussed.

  1. Fusion Breeding for Sustainable, Mid Century, Carbon Free Power

    NASA Astrophysics Data System (ADS)

    Manheimer, Wallace

    2015-11-01

    If ITER achieves Q ~10, it is still very far from useful fusion. The fusion power and the driver power will allow only a small amount of power to be delivered, <~50 MW for an ITER-scale tokamak. It is unlikely, considering ``conservative design rules'', that tokamaks can ever be economical pure fusion power producers. Considering the status of other magnetic fusion concepts, it is also very unlikely that any alternate concept will be either. Laser fusion does not seem to be constrained by any conservative design rules, but considering the failure of NIF to achieve ignition, at this point it has many more obstacles to overcome than magnetic fusion. One way out of this dilemma is to use an ITER-size tokamak, or a NIF-size laser, as a fuel breeder for separate nuclear reactors. Hence ITER and NIF become ends in themselves, instead of steps to who knows what DEMO decades later. Such a tokamak can easily live within the constraints of conservative design rules. This has led the author to propose ``The Energy Park'', a sustainable, carbon-free, economical, and environmentally viable power source without proliferation risk, in which one fusion breeder fuels five conventional nuclear reactors and one fast neutron reactor burns the actinide wastes.

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
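
    The procedure is easy to state concretely: compute the ordinary EM fixed-point map for the mixture and move a fraction omega (the step size) toward it. The sketch below is an illustrative toy on synthetic data; the paper's result is local convergence for step sizes between 0 and 2, of which ordinary EM is the omega = 1 case.

    ```python
    import numpy as np

    # Relaxed EM for a two-component 1-D normal mixture. Data, starting
    # values and omega = 1.5 are illustrative choices.
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

    w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    omega = 1.5                                   # over-relaxed step size
    for it in range(300):
        # E-step: responsibilities (normalizing constants cancel row-wise)
        d = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = d / d.sum(axis=1, keepdims=True)
        # M-step targets = the ordinary EM fixed-point map
        nk = r.sum(axis=0)
        w_t = nk / len(x)
        mu_t = (r * x[:, None]).sum(axis=0) / nk
        sig_t = np.sqrt((r * (x[:, None] - mu_t) ** 2).sum(axis=0) / nk)
        # deflected-gradient step: move a fraction omega toward the EM target
        step = max(abs(mu_t - mu).max(), abs(sig_t - sig).max())
        w = w + omega * (w_t - w)
        mu = mu + omega * (mu_t - mu)
        sig = sig + omega * (sig_t - sig)
        if step < 1e-10:
            break
    print("weights", w.round(3), "means", mu.round(3), "sds", sig.round(3))
    ```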

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  4. Progressing in cable-in-conduit for fusion magnets: from ITER to low cost, high performance DEMO

    NASA Astrophysics Data System (ADS)

    Uglietti, D.; Sedlak, K.; Wesche, R.; Bruzzone, P.; Muzzi, L.; della Corte, A.

    2018-05-01

    The performance of ITER toroidal field (TF) conductors still have a significant margin for improvement because the effective strain between ‑0.62% and ‑0.95% limits the strands’ critical current between 15% and 45% of the maximum achievable. Prototype Nb3Sn cable-in-conduit conductors have been designed, manufactured and tested in the frame of the EUROfusion DEMO activities. In these conductors the effective strain has shown a clear improvement with respect to the ITER conductors, reaching values between ‑0.55% and ‑0.28%, resulting in a strand critical current which is two to three times higher than in ITER conductors. In terms of the amount of Nb3Sn strand required for the construction of the DEMO TF magnet system, such improvement may lead to a reduction of at least a factor of two with respect to a similar magnet built with ITER type conductors; a further saving of Nb3Sn is possible if graded conductors/windings are employed. In the best case the DEMO TF magnet could require fewer Nb3Sn strands than the ITER one, despite the larger size of DEMO. Moreover high performance conductors could be operated at higher fields than ITER TF conductors, enabling the construction of low cost, compact, high field tokamaks.

  5. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends heavily on the scheme for forming the Jacobian: automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for the NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against a Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
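
    The matrix-free flavour of the method can be illustrated with SciPy's generic Newton-Krylov driver on a small nonlinear boundary-value problem. This is a stand-in sketch, not the authors' curvilinear immersed-boundary solver: here the Jacobian-vector products are approximated by finite differences inside the Krylov solver, which is exactly the expensive part that their analytical Jacobian replaces.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # Matrix-free Newton-Krylov on -u'' + u^3 = 1, u(0) = u(1) = 0.
    n = 100
    h = 1.0 / (n + 1)

    def residual(u):
        upad = np.concatenate(([0.0], u, [0.0]))            # Dirichlet boundaries
        lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
        return -lap + u**3 - 1.0

    u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
    print("max |residual|:", np.abs(residual(u)).max())
    ```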

  6. A phantom design for assessment of detectability in PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollenweber, Scott D., E-mail: scott.wollenweber@g

    2016-09-15

    Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of 18F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of the quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages including filling simplicity, wall-less contrast features, the control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.

  7. Holographic particle size extraction by using Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2014-06-01

    A new method for measuring object size from in-line holograms by using the Wigner-Ville distribution (WVD) is proposed. The proposed method has advantages over conventional numerical reconstruction in that it is free from iterative processing and can extract the object size and position with only a single computation of the WVD. Experimental verification of the proposed method is presented.
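
    A minimal discrete (pseudo) Wigner-Ville implementation shows the one-shot character of the approach. The chirp test signal below is an illustrative stand-in, not holographic fringe data; for a hologram, the ridge location would encode the particle position and size instead of an instantaneous frequency.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Discrete pseudo Wigner-Ville: W[n, k] = FFT_tau{ x[n+tau] conj(x[n-tau]) }.
    def wigner_ville(x):
        n = len(x)
        acf = np.zeros((n, n), dtype=complex)
        for t in range(n):
            taumax = min(t, n - 1 - t)
            tau = np.arange(-taumax, taumax + 1)
            acf[t, tau % n] = x[t + tau] * np.conj(x[t - tau])
        return np.fft.fft(acf, axis=1).real   # bin k maps to f = k * fs / (2n)

    fs, n = 1000.0, 256
    t = np.arange(n) / fs
    sig = hilbert(np.cos(2 * np.pi * (50 * t + 100 * t**2)))   # 50 Hz up-chirp
    W = wigner_ville(sig)
    k = W[n // 2].argmax()                     # ridge at the mid-record time
    print("instantaneous frequency at mid-record:", k * fs / (2 * n), "Hz")
    ```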

  8. Use of reconstructed 3D equilibria to determine onset conditions of helical cores in tokamaks for extrapolation to ITER

    NASA Astrophysics Data System (ADS)

    Wingen, A.; Wilcox, R. S.; Seal, S. K.; Unterberg, E. A.; Cianciosa, M. R.; Delgado-Aparicio, L. F.; Hirshman, S. P.; Lao, L. L.

    2018-03-01

    Large, spontaneous m/n = 1/1 helical cores are shown to be expected in tokamaks such as ITER with extended regions of low or reversed magnetic shear profiles and q near 1 in the core. The threshold for this spontaneous symmetry breaking is determined using VMEC scans, beginning with reconstructed 3D equilibria from DIII-D and Alcator C-Mod based on observed internal 3D deformations. The helical core is a saturated internal kink mode (Wesson 1986 Plasma Phys. Control. Fusion 28 243); its onset threshold is shown to be proportional to (dp/dρ)/B_t² around q = 1. Below the threshold, applied 3D fields can drive a helical core to finite size, as in DIII-D. The helical core size thereby depends on the magnitude of the applied perturbation. Above it, a small, random 3D kick causes a bifurcation from axisymmetry and excites a spontaneous helical core, which is independent of the kick size. Systematic scans of the q-profile show that the onset threshold is very sensitive to the q-shear in the core. Helical cores occur frequently in Alcator C-Mod during ramp-up when slow current penetration results in a reversed shear q-profile, which is favorable for helical core formation. Finally, a comparison of the helical core onset threshold for discharges from DIII-D, Alcator C-Mod and ITER confirms that while DIII-D is marginally stable, Alcator C-Mod and especially ITER are highly susceptible to helical core formation without being driven by an externally applied 3D magnetic field.

  9. Use of reconstructed 3D equilibria to determine onset conditions of helical cores in tokamaks for extrapolation to ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wingen, A.; Wilcox, R. S.; Seal, S. K.

    In this paper, large, spontaneous m/n = 1/1 helical cores are shown to be expected in tokamaks such as ITER with extended regions of low or reversed magnetic shear profiles and q near 1 in the core. The threshold for this spontaneous symmetry breaking is determined using VMEC scans, beginning with reconstructed 3D equilibria from DIII-D and Alcator C-Mod based on observed internal 3D deformations. The helical core is a saturated internal kink mode (Wesson 1986 Plasma Phys. Control. Fusion 28 243); its onset threshold is shown to be proportional to (dp/dρ)/B_t² around q = 1. Below the threshold, applied 3D fields can drive a helical core to finite size, as in DIII-D. The helical core size thereby depends on the magnitude of the applied perturbation. Above it, a small, random 3D kick causes a bifurcation from axisymmetry and excites a spontaneous helical core, which is independent of the kick size. Systematic scans of the q-profile show that the onset threshold is very sensitive to the q-shear in the core. Helical cores occur frequently in Alcator C-Mod during ramp-up when slow current penetration results in a reversed shear q-profile, which is favorable for helical core formation. In conclusion, a comparison of the helical core onset threshold for discharges from DIII-D, Alcator C-Mod and ITER confirms that while DIII-D is marginally stable, Alcator C-Mod and especially ITER are highly susceptible to helical core formation without being driven by an externally applied 3D magnetic field.

  10. Can SNOMED CT be squeezed without losing its shape?

    PubMed

    López-García, Pablo; Schulz, Stefan

    2016-09-21

    In biomedical applications where the size and complexity of SNOMED CT become problematic, using a smaller subset that can act as a reasonable substitute is usually preferred. In a special class of use cases, such as ontology-based quality assurance or scaling experiments for real-time performance, it is essential that modules show a shape similar to that of SNOMED CT in terms of concept distribution per sub-hierarchy. Exactly how to extract such balanced modules remains unclear, as most previous work on ontology modularization has focused on other problems. In this study, we investigate to what extent extracting balanced modules that preserve the original shape of SNOMED CT is possible, by presenting and evaluating an iterative algorithm. We used a graph-traversal modularization approach based on an input signature. To conform to our definition of a balanced module, we implemented an iterative algorithm that carefully bootstrapped and dynamically adjusted the signature at each step. We measured the error for each sub-hierarchy and defined convergence as a residual sum of squares <1. Using 2000 concepts as an initial signature, our algorithm converged after seven iterations and extracted a module 4.7% the size of SNOMED CT. Seven sub-hierarchies were either over- or under-represented, within a range of 1-8%. Our study shows that balanced modules can be extracted from large terminologies using ontology graph-traversal modularization techniques under certain conditions: the process is repeated a number of times, the input signature is dynamically adjusted in each iteration, and a moderate under- or over-representation of some hierarchies is tolerated. In the case of SNOMED CT, our results conclusively show that it can be squeezed to less than 5% of its size without any sub-hierarchy losing its shape by more than 8%, which is likely sufficient in most use cases.
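
    The shape-preserving adjustment reduces to a small fixed-point sketch: measure the module's per-hierarchy shares, compare with the full terminology, and re-weight the signature toward under-represented hierarchies. All numbers below (target shares, traversal bias factors, RSS scale) are illustrative, not SNOMED CT statistics, and a real traversal would also pull in ancestors and definitional references.

    ```python
    # Toy balanced-module iteration: a traversal that drags in extra "finding"
    # concepts (mimicking ancestor closure) skews the module shape, and the
    # signature weights are re-adjusted until the shares match the terminology.
    targets = {"finding": 0.40, "procedure": 0.25, "body": 0.20, "organism": 0.15}
    bias = {"finding": 1.8, "procedure": 1.0, "body": 0.9, "organism": 0.8}

    weights = dict(targets)                     # initial signature mirrors targets
    for step in range(50):
        raw = {h: weights[h] * bias[h] for h in targets}   # traversal output
        total = sum(raw.values())
        share = {h: raw[h] / total for h in targets}
        rss = sum((share[h] - targets[h]) ** 2 for h in targets) * 1e4
        if rss < 1.0:                           # convergence criterion
            break
        for h in targets:                       # boost under-represented parts
            weights[h] *= targets[h] / share[h]
    print("converged at step", step, "with shares",
          {h: round(s, 3) for h, s in share.items()})
    ```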

  11. ELM mitigation studies in JET and implications for ITER

    NASA Astrophysics Data System (ADS)

    de La Luna, Elena

    2009-11-01

    Type I edge localized modes (ELMs) remain a serious concern for ITER because of the high transient heat and particle fluxes that can lead to rapid erosion of the divertor plates. This has stimulated worldwide research exploring different methods to avoid, or at least mitigate, the ELM energy loss while maintaining adequate confinement. ITER will require reliable ELM control over a wide range of operating conditions, including changes in the edge safety factor, so a suite of different techniques is highly desirable. In JET several techniques have been demonstrated for controlling the frequency and size of type I ELMs, including resonant magnetic perturbations (RMPs) of the edge field, ELM magnetic triggering by fast vertical movement of the plasma column (``vertical kicks'') and ELM pacing using pellet injection. In this paper we present results from recent dedicated experiments in JET focusing on integrating the different ELM mitigation methods into similar plasma scenarios. Plasma parameter scans provide a comparison of the performance of the different techniques in terms of both the reduction in ELM size and the impact of each control method on plasma confinement. The compatibility of the different ELM mitigation schemes has also been investigated. The plasma response to RMPs and vertical kicks during the ELM mitigation phase shares common features: the reduction in ELM size (up to a factor of 3) is accompanied by a reduction in pedestal pressure (mainly due to a loss of density) with only a minor (<10%) reduction of the stored energy. Interestingly, it has been found that the combined application of RMPs and kicks leads to a reduction of the threshold perturbation level (vertical displacement in the case of the kicks) necessary for the ELM mitigation to occur. The implications of these results for ITER will be discussed.

  12. Use of reconstructed 3D equilibria to determine onset conditions of helical cores in tokamaks for extrapolation to ITER

    DOE PAGES

    Wingen, A.; Wilcox, R. S.; Seal, S. K.; ...

    2018-01-15

    In this paper, large, spontaneous m/n = 1/1 helical cores are shown to be expected in tokamaks such as ITER with extended regions of low or reversed magnetic shear profiles and q near 1 in the core. The threshold for this spontaneous symmetry breaking is determined using VMEC scans, beginning with reconstructed 3D equilibria from DIII-D and Alcator C-Mod based on observed internal 3D deformations. The helical core is a saturated internal kink mode (Wesson 1986 Plasma Phys. Control. Fusion 28 243); its onset threshold is shown to be proportional to (dp/dρ)/B_t² around q = 1. Below the threshold, applied 3D fields can drive a helical core to finite size, as in DIII-D. The helical core size thereby depends on the magnitude of the applied perturbation. Above it, a small, random 3D kick causes a bifurcation from axisymmetry and excites a spontaneous helical core, which is independent of the kick size. Systematic scans of the q-profile show that the onset threshold is very sensitive to the q-shear in the core. Helical cores occur frequently in Alcator C-Mod during ramp-up when slow current penetration results in a reversed shear q-profile, which is favorable for helical core formation. In conclusion, a comparison of the helical core onset threshold for discharges from DIII-D, Alcator C-Mod and ITER confirms that while DIII-D is marginally stable, Alcator C-Mod and especially ITER are highly susceptible to helical core formation without being driven by an externally applied 3D magnetic field.

  13. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design, in which the aerodynamic shape is usually decided first, then the airframe is sized for strength, and so forth. An analogous sequence could be laid out for any other major industrial product, for instance a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is the parallelism of the disciplinary subtasks. This parallelism is important in order to develop the broad work front necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  14. Physics and technology in the ion-cyclotron range of frequency on Tore Supra and TITAN test facility: implication for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X; Bernard, J. M.; Colas, L.

    2013-01-01

    To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate the risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra and test-bed facilities, together with a significant modelling effort. The paper summarizes the recent results in the following areas: comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna; a new model for calculating the ICRH sheath rectification in the antenna vicinity, applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas; full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code, showing that with 20 MW of power a current of 400 kA could be driven on axis in the DT scenario, with a comparison between the DT and DT(3He) scenarios for heating and current drive efficiencies; first operation of the CW test-bed facility TITAN, designed for testing ITER ICRH components and able to host up to a quarter of an ITER antenna; and R&D on high-permittivity materials to improve the load of test facilities and better simulate ITER plasma antenna loading conditions.

  15. From Intent to Action: An Iterative Engineering Process

    ERIC Educational Resources Information Center

    Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain

    2015-01-01

    Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…

  16. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

    Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of the performance analysis of such devices and a predictive tool aiding the design and development of future devices. Prior attempts at this modeling have mostly used direct solvers for the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is then mostly prohibitive, as the memory demands of a direct solver place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution, and we explore the feasibility of using them in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses, including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
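
    The role of the initial guess can be demonstrated with a generic Krylov solver: warm-starting the "hot" solve from a nearby "cold" solution lowers the initial residual and saves iterations. The 1-D tridiagonal operators below are illustrative stand-ins, not plasma dielectric responses, and the shifts and tolerances are arbitrary.

    ```python
    import numpy as np
    from scipy.sparse import diags, eye
    from scipy.sparse.linalg import gmres

    n = 200
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
    A_cold = T + 0.12 * eye(n)                # "cold plasma" operator
    A_hot = T + 0.10 * eye(n)                 # nearby "hot" operator
    b = np.random.default_rng(3).normal(size=n)

    def solve(A, x0, label):
        count = 0
        def cb(_):                            # counts GMRES iterations
            nonlocal count
            count += 1
        x, info = gmres(A, b, x0=x0, atol=1e-8, maxiter=5000, callback=cb)
        print(f"{label}: {count} iterations (info={info})")
        return x

    x_cold = solve(A_cold, np.zeros(n), "cold solve, zero guess")
    solve(A_hot, np.zeros(n), "hot solve, zero guess")
    solve(A_hot, x_cold, "hot solve, warm start from cold solution")
    ```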

  17. Diagnostics of the ITER neutral beam test facility.

    PubMed

    Pasqualotto, R; Serianni, G; Sonato, P; Agostini, M; Brombin, M; Croci, G; Dalla Palma, M; De Muri, M; Gazza, E; Gorini, G; Pomaro, N; Rizzolo, A; Spolaore, M; Zaniol, B

    2012-02-01

    The ITER heating neutral beam (HNB) injector, based on negative ions accelerated at 1 MV, will be tested and optimized in the SPIDER source and MITICA full injector prototypes, using a set of diagnostics not available on the ITER HNB. The RF source, where the H(-)/D(-) production is enhanced by cesium evaporation, will be monitored with thermocouples, electrostatic probes, optical emission spectroscopy, cavity ring down, and laser absorption spectroscopy. The beam is analyzed by cooling water calorimetry, a short pulse instrumented calorimeter, beam emission spectroscopy, visible tomography, and neutron imaging. Design of the diagnostic systems is presented.

  18. The ITER bolometer diagnostic: Status and plans

    NASA Astrophysics Data System (ADS)

    Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by a presentation of the plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to remote handling (RH) tools for calibration.

  19. Constructing Integrable Full-pressure Full-current Free-boundary Stellarator Magnetohydrodynamic Equilibria

    NASA Astrophysics Data System (ADS)

    Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.

    2003-06-01

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces, and chaotic field lines result when magnetic islands overlap. An analogous case occurs with 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Techniques for `healing' vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary full-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A. H. Reiman & H. S. Greenside, Comput. Phys. Commun. 43:157, 1986], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel the resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G. H. Neilson et al., Phys. Plasmas 7:1911, 2000].
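
    The self-consistent coil adjustment can be caricatured as a one-dimensional fixed-point iteration. The linear "plasma response" below is an illustrative stand-in for the equilibrium solver's output; the point is only that re-cancelling the resonant field each iterate converges when the plasma's back-reaction is sub-unity.

    ```python
    # Toy island-healing loop: recompute the plasma's resonant field at each
    # equilibrium iterate and retune one coil Fourier coefficient to cancel it.
    plasma_response = lambda c: 1.0 + 0.3 * c   # illustrative linear response
    c = 0.0                                     # coil Fourier coefficient change
    for it in range(50):
        b_res = plasma_response(c) + c          # total resonant field at surface
        if abs(b_res) < 1e-12:
            break
        c -= b_res                              # cancel the current resonance
    print(f"coil adjustment {c:.6f} after {it} iterations, residual {b_res:.2e}")
    ```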

  20. Scaling the low-shear pulsatile TORVAD for pediatric heart failure

    PubMed Central

    Gohean, Jeffrey R.; Larson, Erik R.; Hsi, Brian H.; Kurusz, Mark; Smalling, Richard W.; Longoria, Raul G.

    2016-01-01

    This article provides an overview of the design challenges associated with scaling the low-shear pulsatile TORVAD ventricular assist device (VAD) for treating pediatric heart failure. A cardiovascular system model was used to determine that a 15 ml stroke volume device with a maximum flow rate of 4 L/min can provide full support to pediatric patients with body surface areas between 0.6 and 1.5 m². Low shear stress in the blood is preserved as the device is scaled down and remains at least two orders of magnitude less than in continuous flow VADs. A new magnetic linkage coupling the rotor and piston has been optimized using a finite element model (FEM), resulting in increased heat transfer to the blood while reducing the overall size of TORVAD. Motor FEM has also been used to reduce motor size and improve motor efficiency and heat transfer. FEM analysis predicts no more than a 1°C temperature rise on any blood or tissue contacting surface of the device. The iterative computational approach established provides a methodology for developing a TORVAD platform technology with various device sizes for supporting the circulation of infants to adults. PMID:27832001

  1. Scaling the Low-Shear Pulsatile TORVAD for Pediatric Heart Failure.

    PubMed

    Gohean, Jeffrey R; Larson, Erik R; Hsi, Brian H; Kurusz, Mark; Smalling, Richard W; Longoria, Raul G

    This article provides an overview of the design challenges associated with scaling the low-shear pulsatile TORVAD ventricular assist device (VAD) for treating pediatric heart failure. A cardiovascular system model was used to determine that a 15 ml stroke volume device with a maximum flow rate of 4 L/min can provide full support to pediatric patients with body surface areas between 0.6 and 1.5 m². Low shear stress in the blood is preserved as the device is scaled down and remains at least two orders of magnitude less than in continuous flow VADs. A new magnetic linkage coupling the rotor and piston has been optimized using a finite element model (FEM), resulting in increased heat transfer to the blood while reducing the overall size of TORVAD. Motor FEM has also been used to reduce motor size and improve motor efficiency and heat transfer. FEM analysis predicts no more than a 1°C temperature rise on any blood or tissue contacting surface of the device. The iterative computational approach established provides a methodology for developing a TORVAD platform technology with various device sizes for supporting the circulation of infants to adults.

  2. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  3. Performance of the full size nGEM detector for the SPIDER experiment

    NASA Astrophysics Data System (ADS)

    Muraro, A.; Croci, G.; Albani, G.; Claps, G.; Cavenago, M.; Cazzaniga, C.; Dalla Palma, M.; Grosso, G.; Murtas, F.; Pasqualotto, R.; Perelli Cippo, E.; Rebai, M.; Tardocchi, M.; Tollin, M.; Gorini, G.

    2016-03-01

    The ITER neutral beam test facility under construction in Padova will host two experimental devices: SPIDER, a 100 kV negative H/D RF beam source, and MITICA, a full scale, 1 MeV deuterium beam injector. SPIDER will start operations in 2016 while MITICA is expected to start during 2019. Both devices feature a beam dump used to stop the produced deuteron beam. Detection of the fusion neutrons produced between beam deuterons and dump-implanted deuterons will be used as a means to resolve the horizontal beam intensity profile. The neutron detection system will be placed right behind the beam dump, as close to the neutron emitting surface as possible, thus providing a map of the neutron emission on the beam dump surface. The system uses nGEM neutron detectors: Gas Electron Multiplier detectors equipped with a cathode that also serves as a neutron-proton converter foil. The cathode is designed to ensure that most of the neutrons detected at a point of the nGEM surface are emitted from the corresponding beamlet footprint (with dimensions of about 40 × 22 mm²) on the dump front surface. The size of the nGEM detector for SPIDER is 352 mm × 200 mm. Several smaller prototypes have been successfully built in recent years, and the experience gained on these detectors led to the production of the full size detector for SPIDER during 2014. This nGEM has a read-out board made of 256 pads (arranged in a 16 × 16 matrix), each with dimensions of 22 mm × 13 mm. This paper describes the production of this detector and its tests (in terms of beam profile reconstruction capability, uniformity over the active area, gamma rejection capability and time stability) performed on the ROTAX beam-line at the ISIS spallation source (Didcot, UK).

  4. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) the projections are sorted into phases, and a full reconstruction over all phases generates an image CT_M; 2) forward projections of CT_M are generated at the desired-phase projection angles, and reconstructing the difference between the measured projections and these forward projections yields CT_Sub1, in which the desired phase component is diminished; 3) by adding CT_Sub1 back to CT_M, a no-motion CBCT, CT_S1, can be computed; 4) CT_S1 still contains a residual motion component; 5) this residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used; the reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from the unprocessed phase-sorted projections alone. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.
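
    The workflow maps onto a small linear-algebra toy, sketched below with random matrices standing in for the projection and FDK reconstruction operators. With an exact pseudo-inverse the subtraction converges in one step; the approximate FDK inverse is what makes the real algorithm iterative.

    ```python
    import numpy as np

    # ISA analogue: correct the motion-blurred full reconstruction using the
    # residual at the target phase's projection angles.
    rng = np.random.default_rng(4)
    npix = 50
    P_t = rng.normal(size=(40, npix))           # projections at the target phase
    P_o = rng.normal(size=(40, npix))           # projections at the other phases
    x_target, x_other = rng.random(npix), rng.random(npix)
    y_t, y_o = P_t @ x_target, P_o @ x_other    # measured phase-sorted data

    recon = lambda P, y: np.linalg.lstsq(P, y, rcond=None)[0]
    x_full = recon(np.vstack([P_t, P_o]), np.concatenate([y_t, y_o]))  # blurred

    x_est = x_full.copy()
    for it in range(5):
        resid = y_t - P_t @ x_est               # mismatch at target-phase angles
        x_est += recon(P_t, resid)              # add back the residual image

    err = lambda x: np.linalg.norm(x - x_target) / np.linalg.norm(x_target)
    print(f"relative error: blurred {err(x_full):.3f} -> ISA {err(x_est):.3f}")
    ```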

  5. Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures

    DTIC Science & Technology

    2003-01-01

    NiFe magnetic nano-elements are calculated. INTRODUCTION: With decreasing size of magnetic nanostructures, thermal effects become increasingly important. [...] The thermal field is assumed to be a Gaussian random process with the statistical properties ⟨H_th(t)⟩ = 0 and ⟨H_th,i(t) H_th,j(t')⟩ [...]. The optimal path satisfies the condition ∇E(M^(k)) − [∇E(M^(k)) · t] t = 0, for k = 1, ..., m (eq. 12). The optimal path can be found using an iterative scheme; in each iteration step the...

  6. First results of the ITER-relevant negative ion beam test facility ELISE (invited).

    PubMed

    Fantz, U; Franzen, P; Heinemann, B; Wünderlich, D

    2014-02-01

    An important step in the European R&D roadmap towards the neutral beam heating systems of ITER is the new test facility ELISE (Extraction from a Large Ion Source Experiment) for large-scale extraction from a half-size ITER RF source. The test facility was constructed over the last few years at Max-Planck-Institut für Plasmaphysik, Garching, and is now operational. ELISE is gaining early experience of the performance and operation of large RF-driven negative hydrogen ion sources, with plasma illumination of a source area of 1 × 0.9 m² and an extraction area of 0.1 m² using 640 apertures. First results in volume operation, i.e., without caesium seeding, are presented.

  7. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy for estimating quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain on 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
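
    A one-parameter toy makes the FWI loop concrete: a delayed Ricker wavelet stands in for the FDTD electromagnetic propagator, and a gradient-based optimizer iteratively reduces the trace misfit. The true value, starting model and bounds are illustrative, and real FWI computes gradients with the adjoint-state method rather than finite differences.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Recover a half-space relative permittivity from one reflected GPR trace.
    c0, depth = 3e8, 1.0                        # vacuum speed (m/s), depth (m)
    t = np.linspace(0, 60e-9, 1200)

    def forward(eps_r, f=250e6):                # delayed Ricker "propagator"
        a = (np.pi * f * (t - 2 * depth * np.sqrt(eps_r) / c0)) ** 2
        return (1 - 2 * a) * np.exp(-a)

    d_obs = forward(6.0)                        # "observed" trace, true eps_r = 6

    def misfit(m):                              # least-squares data misfit
        r = forward(m[0]) - d_obs
        return float(np.sum(r * r))

    res = minimize(misfit, x0=[5.0], method="L-BFGS-B", bounds=[(1.0, 12.0)])
    print("recovered eps_r:", round(res.x[0], 3))
    ```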

  8. Magnetic-confinement fusion

    NASA Astrophysics Data System (ADS)

    Ongena, J.; Koch, R.; Wolf, R.; Zohm, H.

    2016-05-01

    Our modern society requires environmentally friendly solutions for energy production. Energy can be released not only from the fission of heavy nuclei but also from the fusion of light nuclei. Nuclear fusion is an important option for a clean and safe solution for our long-term energy needs. The extremely high temperatures required for the fusion reaction are routinely realized in several magnetic-fusion machines. Since the early 1990s, up to 16 MW of fusion power has been released in pulses of a few seconds, corresponding to a power multiplication close to break-even. Our understanding of the very complex behaviour of a magnetized plasma at temperatures between 150 and 200 million °C surrounded by cold walls has also advanced substantially. This steady progress has resulted in the construction of ITER, a fusion device with a planned fusion power output of 500 MW in pulses of 400 s. ITER should provide answers to remaining important questions on the integration of physics and technology, through a full-size demonstration of a tenfold power multiplication, and on nuclear safety aspects. Here we review the basic physics underlying magnetic fusion: past achievements, present efforts and the prospects for future production of electrical energy. We also discuss questions related to the safety, waste management and decommissioning of a future fusion power plant.

  9. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
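
    The replicated-reconstruction idea can be sketched with a serial SIRT loop in which each "worker" accumulates its back-projection into a private copy that is reduced once per iteration. The random system matrix is an illustrative stand-in for the real projector, and real codes run the chunk loop in parallel.

    ```python
    import numpy as np

    # SIRT with per-worker replicated reconstruction objects (serial stand-in).
    rng = np.random.default_rng(6)
    npix, nray, nchunk = 64, 256, 4
    A = rng.random((nray, npix))                # system matrix stand-in
    x_true = rng.random(npix)
    sino = A @ x_true

    x = np.zeros(npix)
    row_sum, col_sum = A.sum(axis=1), A.sum(axis=0)
    for it in range(500):
        replicas = np.zeros((nchunk, npix))     # one private copy per worker
        for w, rows in enumerate(np.array_split(np.arange(nray), nchunk)):
            r = (sino[rows] - A[rows] @ x) / row_sum[rows]
            replicas[w] = A[rows].T @ r         # worker-local back-projection
        x += replicas.sum(axis=0) / col_sum     # reduce replicas, SIRT update
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```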

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  11. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute the SVD for the open competition called the "Netflix Prize". The algorithm is iterative, so that the approximation error decreases at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and shown to be efficient experimentally.
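
    As a rough illustration of the algorithm class discussed above, here is a NumPy sketch of Funk-style stochastic gradient descent on the known matrix entries; it is not the paper's CUDA kernel, and all parameter values are illustrative:

      import numpy as np

      def funk_svd(ratings, n_users, n_items, k=20, lr=0.005, reg=0.02, epochs=30):
          # ratings: iterable of (user, item, value) triples of known entries.
          rng = np.random.default_rng(0)
          U = 0.1 * rng.standard_normal((n_users, k))   # user features
          V = 0.1 * rng.standard_normal((n_items, k))   # item features
          for _ in range(epochs):
              for u, i, r in ratings:
                  err = r - U[u] @ V[i]                 # prediction error
                  Uu = U[u].copy()                      # pre-update copy
                  U[u] += lr * (err * V[i] - reg * U[u])
                  V[i] += lr * (err * Uu - reg * V[i])
          return U, V                                   # R is approximated by U @ V.T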

  12. Gyrokinetic equations and full f solution method based on Dirac's constrained Hamiltonian and inverse Kruskal iteration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heikkinen, J. A.; Nora, M.

    2011-02-15

    Gyrokinetic equations of motion, the Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J as one of the averaging coordinates in phase space, provides an alternative to standard gyrokinetics. Within second order in the gyrokinetic parameter, the new equations do not show explicit ponderomotive-like or polarization-like terms. Pullback of particle information with an iterated gyrophase and field-dependent gyroradius function from the gyrocenter position defined by gyroaveraged coordinates allows direct numerical integration of the gyrokinetic equations in particle simulation of the field and particles with a full distribution function. As an example, gyrokinetic systems with the polarization drift either present or absent in the equations of motion are considered.

  13. ELM mitigation techniques

    NASA Astrophysics Data System (ADS)

    Evans, T. E.

    2013-07-01

    Large edge-localized mode (ELM) control techniques must be developed to help ensure the success of burning and ignited fusion plasma devices such as tokamaks and stellarators. In full-performance ITER tokamak discharges, with QDT = 10, the energy released by a single ELM could reach ~30 MJ, which is expected to result in an energy density of 10-15 MJ/m² on the divertor targets. This will exceed the estimated divertor ablation limit by a factor of 20-30. A worldwide research program is underway to develop various types of ELM control techniques in preparation for ITER H-mode plasma operations. An overview of the ELM control techniques currently being developed is given, along with the requirements for applying these techniques to plasmas in ITER. Particular emphasis is given to the primary approaches currently being considered for ITER: pellet pacing and resonant magnetic perturbation fields.

  14. Survival and in-vessel redistribution of beryllium droplets after ITER disruptions

    NASA Astrophysics Data System (ADS)

    Vignitchouk, L.; Ratynskaia, S.; Tolias, P.; Pitts, R. A.; De Temmerman, G.; Lehnen, M.; Kiramov, D.

    2018-07-01

    The motion and temperature evolution of beryllium droplets produced by first-wall surface melting after ITER major disruptions and vertical displacement events mitigated during the current quench are simulated by the MIGRAINe dust dynamics code. These simulations employ an updated physical model which addresses droplet-plasma interaction in ITER-relevant regimes characterized by magnetized electron collection and thin-sheath ion collection, as well as electron emission processes induced by electron and high-Z ion impacts. The disruption scenarios have been implemented from DINA simulations of the time-evolving plasma parameters, while the droplet injection points are set to the first-wall locations expected to receive the highest thermal quench heat flux according to field line tracing studies. The droplet size, speed and ejection angle are varied within the range of currently available experimental and theoretical constraints, and the final quantities of interest are obtained by weighting single-trajectory output with different size and speed distributions. Detailed estimates of droplet solidification into dust grains and their subsequent deposition in the vessel are obtained. For representative distributions of the droplet injection parameters, the results indicate that at most a few percent of the initially injected beryllium mass is converted into solid dust, while the remaining mass either vaporizes or forms liquid splashes on the wall. Simulated in-vessel spatial distributions are also provided for the surviving dust, with the aim of guiding planned dust diagnostic, retrieval and clean-up systems on ITER.

  15. 3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS

    NASA Astrophysics Data System (ADS)

    Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath

    2015-10-01

    Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to high loads due to the internal pressure acting inside the motor chamber, an appropriate preload is required to be applied to the joint before subjecting it to external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Traditionally, preload is simulated as a thermal load, and the exact preload is obtained through a number of iterations; in fact, more iterations are required when considering the material nonlinearity of the bolt. This way of simulation takes more computational time to generate the required preload. Nowadays, most commercial software packages use pretension elements for simulating the preload. A pretension element does not require iterations for inducing the preload and can be solved in a single iteration. This approach takes less computational time, and thus one can easily study the characteristics of the joint by varying the preload. When the structure contains many joints with different sizes of fasteners, pretension elements are preferable to the thermal load approach for simulating each fastener size. This paper covers the details of analyses carried out simulating the preload through various options, viz. thermal load, initial state command and pretension element, using the ANSYS finite element package.
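
    For orientation, the thermal-load trick rests on the fact that a fully constrained elastic shank cooled by dT carries an axial force F = E*A*alpha*dT, so a first-guess temperature drop for a target preload can be computed directly; iterations are then needed because real joints are compliant. A back-of-the-envelope sketch with illustrative material values, not the paper's model:

      import math

      # Illustrative values only, not taken from the paper.
      E = 200e9        # Young's modulus of a steel bolt, Pa
      alpha = 12e-6    # coefficient of thermal expansion, 1/K
      d = 0.010        # bolt shank diameter, m
      F_target = 25e3  # desired preload, N

      A = math.pi * d ** 2 / 4
      dT = F_target / (E * A * alpha)   # F = E * A * alpha * dT, rigid joint
      print(f"first-guess cooling of the bolt: {dT:.1f} K")   # about 133 K here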

  16. Mini-Membrane Evaporator for Contingency Spacesuit Cooling

    NASA Technical Reports Server (NTRS)

    Makinen, Janice V.; Bue, Grant C.; Campbell, Colin; Petty, Brian; Craft, Jesse; Lynch, William; Wilkes, Robert; Vogel, Matthew

    2015-01-01

    The next-generation Advanced Extravehicular Mobility Unit (AEMU) Portable Life Support System (PLSS) is integrating a number of new technologies to improve reliability and functionality. One of these improvements is the development of the Auxiliary Cooling Loop (ACL) for contingency crewmember cooling. The ACL is a completely redundant, independent cooling system that consists of a small evaporative cooler (the Mini Membrane Evaporator, Mini-ME), an independent pump, an independent feedwater assembly and an independent Liquid Cooling Garment (LCG). The Mini-ME utilizes the same hollow fiber technology featured in the full-sized AEMU PLSS cooling device, the Spacesuit Water Membrane Evaporator (SWME), but occupies only approximately 25% of the volume of SWME, thereby providing only the necessary crewmember cooling in a contingency situation. The ACL provides a number of benefits when compared with the current EMU PLSS contingency cooling technology, which relies upon a Secondary Oxygen Vessel (SOV): contingency crewmember cooling can be provided for a longer period of time; more contingency situations can be accounted for; there is no reliance on the SOV for contingency cooling, thereby allowing a reduction in SOV size and pressure; and the ACL can be recharged, allowing the AEMU PLSS to be reused even after a contingency event. The first iteration of Mini-ME was developed and tested in-house. Mini-ME is currently packaged in AEMU PLSS 2.0, where it is being tested in environments and situations that are representative of potential future Extravehicular Activities (EVAs). The second iteration of Mini-ME, known as Mini-ME2, is currently being developed to offer more heat rejection capability. The development of this contingency evaporative cooling system will contribute to a more robust and comprehensive AEMU PLSS.

  17. Mini-Membrane Evaporator for Contingency Spacesuit Cooling

    NASA Technical Reports Server (NTRS)

    Makinen, Janice V.; Bue, Grant C.; Campbell, Colin; Craft, Jesse; Lynch, William; Wilkes, Robert; Vogel, Matthew

    2014-01-01

    The next-generation Advanced Extravehicular Mobility Unit (AEMU) Portable Life Support System (PLSS) is integrating a number of new technologies to improve reliability and functionality. One of these improvements is the development of the Auxiliary Cooling Loop (ACL) for contingency crewmember cooling. The ACL is a completely redundant, independent cooling system that consists of a small evaporative cooler (the Mini Membrane Evaporator, Mini-ME), an independent pump, an independent feedwater assembly and an independent Liquid Cooling Garment (LCG). The Mini-ME utilizes the same hollow fiber technology featured in the full-sized AEMU PLSS cooling device, the Spacesuit Water Membrane Evaporator (SWME), but occupies only 25% of the volume of SWME, thereby providing only the necessary crewmember cooling in a contingency situation. The ACL provides a number of benefits when compared with the current EMU PLSS contingency cooling technology, which relies upon a Secondary Oxygen Vessel (SOV): contingency crewmember cooling can be provided for a longer period of time; more contingency situations can be accounted for; there is no reliance on the SOV for contingency cooling, thereby allowing a reduction in SOV size and pressure; and the ACL can be recharged, allowing the AEMU PLSS to be reused even after a contingency event. The first iteration of Mini-ME was developed and tested in-house. Mini-ME is currently packaged in AEMU PLSS 2.0, where it is being tested in environments and situations that are representative of potential future Extravehicular Activities (EVAs). The second iteration of Mini-ME, known as Mini-ME2, is currently being developed to offer more heat rejection capability. The development of this contingency evaporative cooling system will contribute to a more robust and comprehensive AEMU PLSS.

  18. Overview of progress in European medium sized tokamaks towards an integrated plasma-edge/wall solution

    NASA Astrophysics Data System (ADS)

    Meyer, H.; Eich, T.; Beurskens, M.; Coda, S.; Hakola, A.; Martin, P.; Adamek, J.; Agostini, M.; Aguiam, D.; Ahn, J.; Aho-Mantila, L.; Akers, R.; Albanese, R.; Aledda, R.; Alessi, E.; Allan, S.; Alves, D.; Ambrosino, R.; Amicucci, L.; Anand, H.; Anastassiou, G.; Andrèbe, Y.; Angioni, C.; Apruzzese, G.; Ariola, M.; Arnichand, H.; Arter, W.; Baciero, A.; Barnes, M.; Barrera, L.; Behn, R.; Bencze, A.; Bernardo, J.; Bernert, M.; Bettini, P.; Bilková, P.; Bin, W.; Birkenmeier, G.; Bizarro, J. P. S.; Blanchard, P.; Blanken, T.; Bluteau, M.; Bobkov, V.; Bogar, O.; Böhm, P.; Bolzonella, T.; Boncagni, L.; Botrugno, A.; Bottereau, C.; Bouquey, F.; Bourdelle, C.; Brémond, S.; Brezinsek, S.; Brida, D.; Brochard, F.; Buchanan, J.; Bufferand, H.; Buratti, P.; Cahyna, P.; Calabrò, G.; Camenen, Y.; Caniello, R.; Cannas, B.; Canton, A.; Cardinali, A.; Carnevale, D.; Carr, M.; Carralero, D.; Carvalho, P.; Casali, L.; Castaldo, C.; Castejón, F.; Castro, R.; Causa, F.; Cavazzana, R.; Cavedon, M.; Cecconello, M.; Ceccuzzi, S.; Cesario, R.; Challis, C. D.; Chapman, I. T.; Chapman, S.; Chernyshova, M.; Choi, D.; Cianfarani, C.; Ciraolo, G.; Citrin, J.; Clairet, F.; Classen, I.; Coelho, R.; Coenen, J. W.; Colas, L.; Conway, G.; Corre, Y.; Costea, S.; Crisanti, F.; Cruz, N.; Cseh, G.; Czarnecka, A.; D'Arcangelo, O.; De Angeli, M.; De Masi, G.; De Temmerman, G.; De Tommasi, G.; Decker, J.; Delogu, R. S.; Dendy, R.; Denner, P.; Di Troia, C.; Dimitrova, M.; D'Inca, R.; Dorić, V.; Douai, D.; Drenik, A.; Dudson, B.; Dunai, D.; Dunne, M.; Duval, B. P.; Easy, L.; Elmore, S.; Erdös, B.; Esposito, B.; Fable, E.; Faitsch, M.; Fanni, A.; Fedorczak, N.; Felici, F.; Ferreira, J.; Février, O.; Ficker, O.; Fietz, S.; Figini, L.; Figueiredo, A.; Fil, A.; Fishpool, G.; Fitzgerald, M.; Fontana, M.; Ford, O.; Frassinetti, L.; Fridström, R.; Frigione, D.; Fuchert, G.; Fuchs, C.; Furno Palumbo, M.; Futatani, S.; Gabellieri, L.; Gałązka, K.; Galdon-Quiroga, J.; Galeani, S.; Gallart, D.; Gallo, A.; Galperti, C.; Gao, Y.; Garavaglia, S.; Garcia, J.; Garcia-Carrasco, A.; Garcia-Lopez, J.; Garcia-Munoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaspar, J.; Gauthier, E.; Geelen, P.; Geiger, B.; Ghendrih, P.; Ghezzi, F.; Giacomelli, L.; Giannone, L.; Giovannozzi, E.; Giroud, C.; Gleason González, C.; Gobbin, M.; Goodman, T. P.; Gorini, G.; Gospodarczyk, M.; Granucci, G.; Gruber, M.; Gude, A.; Guimarais, L.; Guirlet, R.; Gunn, J.; Hacek, P.; Hacquin, S.; Hall, S.; Ham, C.; Happel, T.; Harrison, J.; Harting, D.; Hauer, V.; Havlickova, E.; Hellsten, T.; Helou, W.; Henderson, S.; Hennequin, P.; Heyn, M.; Hnat, B.; Hölzl, M.; Hogeweij, D.; Honoré, C.; Hopf, C.; Horáček, J.; Hornung, G.; Horváth, L.; Huang, Z.; Huber, A.; Igitkhanov, J.; Igochine, V.; Imrisek, M.; Innocente, P.; Ionita-Schrittwieser, C.; Isliker, H.; Ivanova-Stanik, I.; Jacobsen, A. S.; Jacquet, P.; Jakubowski, M.; Jardin, A.; Jaulmes, F.; Jenko, F.; Jensen, T.; Jeppe Miki Busk, O.; Jessen, M.; Joffrin, E.; Jones, O.; Jonsson, T.; Kallenbach, A.; Kallinikos, N.; Kálvin, S.; Kappatou, A.; Karhunen, J.; Karpushov, A.; Kasilov, S.; Kasprowicz, G.; Kendl, A.; Kernbichler, W.; Kim, D.; Kirk, A.; Kjer, S.; Klimek, I.; Kocsis, G.; Kogut, D.; Komm, M.; Korsholm, S. B.; Koslowski, H. R.; Koubiti, M.; Kovacic, J.; Kovarik, K.; Krawczyk, N.; Krbec, J.; Krieger, K.; Krivska, A.; Kube, R.; Kudlacek, O.; Kurki-Suonio, T.; Labit, B.; Laggner, F. M.; Laguardia, L.; Lahtinen, A.; Lalousis, P.; Lang, P.; Lauber, P.; Lazányi, N.; Lazaros, A.; Le, H. 
B.; Lebschy, A.; Leddy, J.; Lefévre, L.; Lehnen, M.; Leipold, F.; Lessig, A.; Leyland, M.; Li, L.; Liang, Y.; Lipschultz, B.; Liu, Y. Q.; Loarer, T.; Loarte, A.; Loewenhoff, T.; Lomanowski, B.; Loschiavo, V. P.; Lunt, T.; Lupelli, I.; Lux, H.; Lyssoivan, A.; Madsen, J.; Maget, P.; Maggi, C.; Maggiora, R.; Magnussen, M. L.; Mailloux, J.; Maljaars, B.; Malygin, A.; Mantica, P.; Mantsinen, M.; Maraschek, M.; Marchand, B.; Marconato, N.; Marini, C.; Marinucci, M.; Markovic, T.; Marocco, D.; Marrelli, L.; Martin, Y.; Solis, J. R. Martin; Martitsch, A.; Mastrostefano, S.; Mattei, M.; Matthews, G.; Mavridis, M.; Mayoral, M.-L.; Mazon, D.; McCarthy, P.; McAdams, R.; McArdle, G.; McCarthy, P.; McClements, K.; McDermott, R.; McMillan, B.; Meisl, G.; Merle, A.; Meyer, O.; Milanesio, D.; Militello, F.; Miron, I. G.; Mitosinkova, K.; Mlynar, J.; Mlynek, A.; Molina, D.; Molina, P.; Monakhov, I.; Morales, J.; Moreau, D.; Morel, P.; Moret, J.-M.; Moro, A.; Moulton, D.; Müller, H. W.; Nabais, F.; Nardon, E.; Naulin, V.; Nemes-Czopf, A.; Nespoli, F.; Neu, R.; Nielsen, A. H.; Nielsen, S. K.; Nikolaeva, V.; Nimb, S.; Nocente, M.; Nouailletas, R.; Nowak, S.; Oberkofler, M.; Oberparleiter, M.; Ochoukov, R.; Odstrčil, T.; Olsen, J.; Omotani, J.; O'Mullane, M. G.; Orain, F.; Osterman, N.; Paccagnella, R.; Pamela, S.; Pangione, L.; Panjan, M.; Papp, G.; Papřok, R.; Parail, V.; Parra, F. I.; Pau, A.; Pautasso, G.; Pehkonen, S.-P.; Pereira, A.; Perelli Cippo, E.; Pericoli Ridolfini, V.; Peterka, M.; Petersson, P.; Petrzilka, V.; Piovesan, P.; Piron, C.; Pironti, A.; Pisano, F.; Pisokas, T.; Pitts, R.; Ploumistakis, I.; Plyusnin, V.; Pokol, G.; Poljak, D.; Pölöskei, P.; Popovic, Z.; Pór, G.; Porte, L.; Potzel, S.; Predebon, I.; Preynas, M.; Primc, G.; Pucella, G.; Puiatti, M. E.; Pütterich, T.; Rack, M.; Ramogida, G.; Rapson, C.; Rasmussen, J. Juul; Rasmussen, J.; Rattá, G. A.; Ratynskaia, S.; Ravera, G.; Réfy, D.; Reich, M.; Reimerdes, H.; Reimold, F.; Reinke, M.; Reiser, D.; Resnik, M.; Reux, C.; Ripamonti, D.; Rittich, D.; Riva, G.; Rodriguez-Ramos, M.; Rohde, V.; Rosato, J.; Ryter, F.; Saarelma, S.; Sabot, R.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Samaddar, D.; Sanchis-Sanchez, L.; Santos, J.; Sauter, O.; Scannell, R.; Scheffer, M.; Schneider, M.; Schneider, B.; Schneider, P.; Schneller, M.; Schrittwieser, R.; Schubert, M.; Schweinzer, J.; Seidl, J.; Sertoli, M.; Šesnić, S.; Shabbir, A.; Shalpegin, A.; Shanahan, B.; Sharapov, S.; Sheikh, U.; Sias, G.; Sieglin, B.; Silva, C.; Silva, A.; Silva Fuglister, M.; Simpson, J.; Snicker, A.; Sommariva, C.; Sozzi, C.; Spagnolo, S.; Spizzo, G.; Spolaore, M.; Stange, T.; Stejner Pedersen, M.; Stepanov, I.; Stober, J.; Strand, P.; Šušnjara, A.; Suttrop, W.; Szepesi, T.; Tál, B.; Tala, T.; Tamain, P.; Tardini, G.; Tardocchi, M.; Teplukhina, A.; Terranova, D.; Testa, D.; Theiler, C.; Thornton, A.; Tolias, P.; Tophøj, L.; Treutterer, W.; Trevisan, G. L.; Tripsky, M.; Tsironis, C.; Tsui, C.; Tudisco, O.; Uccello, A.; Urban, J.; Valisa, M.; Vallejos, P.; Valovic, M.; Van den Brand, H.; Vanovac, B.; Varoutis, S.; Vartanian, S.; Vega, J.; Verdoolaege, G.; Verhaegh, K.; Vermare, L.; Vianello, N.; Vicente, J.; Viezzer, E.; Vignitchouk, L.; Vijvers, W. A. J.; Villone, F.; Viola, B.; Vlahos, L.; Voitsekhovitch, I.; Vondráček, P.; Vu, N. M. 
T.; Wagner, D.; Walkden, N.; Wang, N.; Wauters, T.; Weiland, M.; Weinzettl, V.; Westerhof, E.; Wiesenberger, M.; Willensdorfer, M.; Wischmeier, M.; Wodniak, I.; Wolfrum, E.; Yadykin, D.; Zagórski, R.; Zammuto, I.; Zanca, P.; Zaplotnik, R.; Zestanakis, P.; Zhang, W.; Zoletnik, S.; Zuin, M.; ASDEX Upgrade, the; MAST; TCV Teams

    2017-10-01

    Integrating plasma core performance with an edge and scrape-off layer (SOL) that leads to tolerable heat and particle loads on the wall is a major challenge. The new European medium-size tokamak task force (EU-MST) coordinates research on ASDEX Upgrade (AUG), MAST and TCV. This multi-machine approach within EU-MST, covering a wide parameter range, is instrumental to progress in the field, as ITER and DEMO core/pedestal and SOL parameters are not achievable simultaneously in present-day devices. A two-pronged approach is adopted. On the one hand, scenarios with tolerable transient heat and particle loads, including active edge-localised mode (ELM) control, are developed. On the other hand, divertor solutions including advanced magnetic configurations are studied. Considerable progress has been made on both approaches, in particular in the fields of ELM control with resonant magnetic perturbations (RMP), small-ELM regimes, detachment onset and control, and filamentary scrape-off-layer transport. For example, full ELM suppression has now been achieved on AUG at low collisionality with n = 2 RMP, maintaining good confinement (H98(y,2) ≈ 0.95). Advances have been made with respect to detachment onset and control. Studies in advanced divertor configurations (Snowflake, Super-X and X-point target divertor) shed new light on SOL physics. Cross-field filamentary transport has been characterised in a wide parameter regime on AUG, MAST and TCV, progressing the theoretical and experimental understanding crucial for predicting first-wall loads in ITER and DEMO. Conditions in the SOL also play a crucial role for ELM stability and access to small-ELM regimes. In the future we will refer to the author list of this paper as the EUROfusion MST1 Team.

  19. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    PubMed

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

    Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). The system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacity of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and the ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed performance similar to the on-the-fly approach while still relying on symmetries, and yielded the lowest relative performance overall. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times, while a fully stored system matrix allowed the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
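
    The trade-off between the storage strategies can be made concrete with a toy 2-D example. The ray tracer below is a crude step-marching stand-in for the exact Siddon-style tracing used in such work, and all names are illustrative:

      import numpy as np

      def ray_coeffs(angle, offset, n=64, step=0.25):
          # Crude step-marching stand-in for exact Siddon-style tracing:
          # accumulate the path length spent in each voxel along one ray.
          c, s = np.cos(angle), np.sin(angle)
          t = np.arange(-n, n, step)
          x = n / 2 + t * c - offset * s
          y = n / 2 + t * s + offset * c
          ok = (x >= 0) & (x < n) & (y >= 0) & (y < n)
          coeffs = {}
          for j in (y[ok].astype(int) * n + x[ok].astype(int)):
              coeffs[j] = coeffs.get(j, 0.0) + step
          return coeffs

      def project_stored(rows, img):
          # Fully stored strategy: rows were traced once and kept in memory.
          # img: flattened n*n image.
          return np.array([sum(a * img[j] for j, a in r.items()) for r in rows])

      def project_otf(rays, img, n=64):
          # On-the-fly strategy: retrace every ray at every call; no storage.
          return np.array([sum(a * img[j] for j, a in ray_coeffs(t, o, n).items())
                           for (t, o) in rays])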

  20. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

    Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature for obtaining the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups that are similar to digital holographic interferometry but omit the reference wave, displacement, deformation, or shape measurement is theoretically possible. For this to work, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources on the object surface, but about their relative phase as well. Not only do experiments require strict mechanical precision to record useful data; even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequence, speckle field characteristics, and sampling. Experiments were done to demonstrate this principle with a deformable object as large as 5×5 cm. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
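
    A minimal sketch of multi-plane iterative phase retrieval of the kind investigated here, assuming angular-spectrum propagation between the recording planes; parameter names are illustrative and the published algorithm may differ in detail:

      import numpy as np

      def propagate(u, dz, wavelength, dx):
          # Angular-spectrum propagation of a square sampled field over dz.
          n = u.shape[0]
          fx = np.fft.fftfreq(n, dx)
          f2 = fx[:, None] ** 2 + fx[None, :] ** 2
          arg = 1.0 - wavelength ** 2 * f2
          mask = arg > 0                                  # drop evanescent part
          kz = 2 * np.pi / wavelength * np.sqrt(np.where(mask, arg, 0.0))
          return np.fft.ifft2(np.fft.fft2(u) * mask * np.exp(1j * kz * dz))

      def retrieve_phase(I, z, wavelength, dx, sweeps=20):
          # I: measured intensity patterns at axial positions z (same order).
          # Cycle forward and backward through the planes, keeping the
          # evolving phase and enforcing the measured amplitude at each one.
          u = np.sqrt(I[0]).astype(complex)
          cur = 0
          order = list(range(1, len(I))) + list(range(len(I) - 2, -1, -1))
          for _ in range(sweeps):
              for k in order:
                  u = propagate(u, z[k] - z[cur], wavelength, dx)
                  u = np.sqrt(I[k]) * np.exp(1j * np.angle(u))
                  cur = k
          return u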

  1. Fast Acting Eddy Current Driven Valve for Massive Gas Injection on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyttle, Mark S; Baylor, Larry R; Carmichael, Justin R

    2015-01-01

    Tokamak plasma disruptions present a significant challenge to ITER as they can result in intense heat flux, large forces from halo and eddy currents, and potential first-wall damage from the generation of multi-MeV runaway electrons. Massive gas injection (MGI) of high-Z material using fast acting valves is being explored on existing tokamaks and is planned for ITER as a method to evenly distribute the thermal load of the plasma to prevent melting, to control the rate of the current decay to minimize mechanical loads, and to suppress the generation of runaway electrons. A fast acting valve and accompanying power supply have been designed, and first test articles produced, to meet the requirements for a disruption mitigation system on ITER. The test valve incorporates a flyer plate actuator similar to designs deployed on TEXTOR, ASDEX Upgrade, and JET [1-3], of a size useful for ITER, with special considerations to mitigate the high mechanical forces developed during actuation due to high background magnetic fields. The valve includes a tip design and all-metal valve stem sealing for compatibility with tritium and high neutron and gamma fluxes.

  2. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor appearing at the same position in the fixed tiled code, and replace the expressions involving integer factors with expressions involving parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
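
    For reference, the Nussinov recurrence that the tiled code implements maximizes the number of base pairs: N(i,j) = max(N(i,j-1), max over admissible k of N(i,k-1) + N(k+1,j-1) + 1, where position k pairs with j). A plain, untiled Python version follows; the paper's polyhedral tiling and parallelization are beyond a short sketch:

      def nussinov(seq, min_loop=1):
          # Maximum base-pair count for an RNA sequence under the Nussinov
          # recurrence; min_loop enforces the minimal hairpin gap.
          pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          N = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  best = N[i][j - 1]            # j left unpaired
                  for k in range(i, j - min_loop):
                      if (seq[k], seq[j]) in pairs:
                          left = N[i][k - 1] if k > i else 0
                          best = max(best, left + N[k + 1][j - 1] + 1)
                  N[i][j] = best
          return N[0][n - 1]                    # e.g. nussinov("GGGAAAUCC") == 3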

  3. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.
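
    The raw input to such a global fit is the equilibrated cluster size distribution, with clusters defined by a distance cutoff. A minimal sketch of extracting that distribution from one frame of monomer coordinates (single-linkage clustering via union-find; assumes a NumPy coordinate array, no periodic boundaries, O(n²) distances):

      import numpy as np

      def cluster_size_histogram(coords, cutoff):
          # coords: (n, 3) array of monomer positions for one frame.
          # Returns hist where hist[s-1] is the number of clusters of size s.
          n = len(coords)
          parent = list(range(n))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]   # path halving
                  i = parent[i]
              return i

          for i in range(n):
              for j in range(i + 1, n):
                  if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                      parent[find(i)] = find(j)

          roots = [find(i) for i in range(n)]
          sizes = np.bincount(roots)              # cluster size at each root
          return np.bincount(sizes)[1:]           # drop the zero-count bin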

  4. Evaluation of the effects of patient arm attenuation in SPECT cardiac perfusion imaging

    NASA Astrophysics Data System (ADS)

    Luo, Dershan; King, M. A.; Pan, Tin-Su; Xia, Weishi

    1996-12-01

    It was hypothesized that the use of attenuation correction could compensate for the degradation in the uniformity of apparent localization of imaging agents seen in cardiac walls when patients are imaged with arms at their sides. Noise-free simulations of the digital MCAT phantom were employed to investigate this hypothesis. Four variations in camera size and collimation scheme were investigated. We observed that: 1) without attenuation correction, the arms had little additional influence on the uniformity of the heart for 180° reconstructions and caused a small increase in nonuniformity for 360° reconstructions, where the impact of both arms was included; 2) change in patient size had more of an impact on count uniformity than the presence of the arms, either with or without attenuation correction; 3) for a low number of iterations and large patient size, slightly better uniformity was obtained from parallel emission data than from fan-beam emission data, independent of whether parallel or fan-beam transmission data were used to reconstruct the attenuation maps; and 4) for all camera configurations, uniformity was improved with attenuation correction and, given a sufficient number of iterations, it was comparable among different imaging geometry combinations. Thus, iterative algorithms can compensate for the additional attenuation imposed by larger patients or by having the arms at the sides. When the arms are at the sides of the patient, however, a larger radius of rotation may be required, resulting in decreased spatial resolution.

  6. EVA Suit R and D for Performance Optimization

    NASA Technical Reports Server (NTRS)

    Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2014-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on human-centric design by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. They also allow scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and for sizes that do not yet exist. This is extremely advantageous: it enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as a design criterion, and enables designs to target specific populations.

  7. Orthobiologics in the Foot and Ankle.

    PubMed

    Temple, H Thomas; Malinin, Theodore I

    2016-12-01

    Many allogeneic biologic materials, by themselves or in combination with cells or cell products, may be transformative in healing or regeneration of musculoskeletal bone and soft tissues. By reconfiguring the size, shape, and methods of tissue preparation to improve deliverability and storage, unique iterations of traditional tissue scaffolds have emerged. These new iterations, combined with new cell technologies, have shaped an exciting platform of regenerative products that are effective and provide a bridge to newer and better methods of providing care for orthopedic foot and ankle patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Parameter Estimation for the Four Parameter Beta Distribution.

    DTIC Science & Technology

    1983-12-01

    [Abstract not reproducible: the record consists of OCR fragments of Monte Carlo result tables (sample sizes, estimator labels such as MME1, iteration counts, FCN2/FCN4 usage and divergence counts, and correlation-coefficient matrices) from a study of parameter estimation for the four-parameter beta distribution.]

  9. Status of the ITER Electron Cyclotron Heating and Current Drive System

    NASA Astrophysics Data System (ADS)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio; Carannante, Giuseppe; Cavinato, Mario; Cismondi, Fabio; Denisov, Grigory; Farina, Daniela; Gagliardi, Mario; Gandini, Franco; Gassmann, Thibault; Goodman, Timothy; Hanson, Gregory; Henderson, Mark A.; Kajiwara, Ken; McElhaney, Karen; Nousiainen, Risto; Oda, Yasuhisa; Omori, Toshimichi; Oustinov, Alexander; Parmar, Darshankumar; Popov, Vladimir L.; Purohit, Dharmesh; Rao, Shambhu Laxmikanth; Rasmussen, David; Rathod, Vipal; Ronden, Dennis M. S.; Saibene, Gabriella; Sakamoto, Keishi; Sartori, Filippo; Scherer, Theo; Singh, Narinder Pal; Strauß, Dirk; Takahashi, Koji

    2016-01-01

    The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER is made of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW to the plasma out of the 24 MW of generated power, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond.

  10. Review of the ITER diagnostics suite for erosion, deposition, dust and tritium measurements

    NASA Astrophysics Data System (ADS)

    Reichle, R.; Andrew, P.; Bates, P.; Bede, O.; Casal, N.; Choi, C. H.; Barnsley, R.; Damiani, C.; Bertalot, L.; Dubus, G.; Ferreol, J.; Jagannathan, G.; Kocan, M.; Leipold, F.; Lisgo, S. W.; Martin, V.; Palmer, J.; Pearce, R.; Philipps, V.; Pitts, R. A.; Pampin, R.; Passedat, G.; Puiu, A.; Suarez, A.; Shigin, P.; Shu, W.; Vayakis, G.; Veshchev, E.; Walsh, M.

    2015-08-01

    Dust and tritium inventories in the vacuum vessel have upper limits in ITER that are set by nuclear safety requirements. Erosion, migration and re-deposition of wall material, together with fuel co-deposition, will be largely responsible for these inventories. The diagnostic suite required to monitor these processes, along with the corresponding measurement requirements, is currently under review given the recent decision by the ITER Organization to eliminate the first carbon/tungsten (C/W) divertor and begin operations with a full-W variant (Pitts et al. [1]). This paper presents the result of this review as well as the status of the chosen diagnostics.

  11. Convergent Polishing: A Simple, Rapid, Full Aperture Polishing Process of High Quality Optical Flats & Spheres

    PubMed Central

    Suratwala, Tayyab; Steele, Rusty; Feit, Michael; Dylla-Spears, Rebecca; Desjardin, Richard; Mason, Dan; Wong, Lana; Geraghty, Paul; Miller, Phil; Shen, Nan

    2014-01-01

    Convergent Polishing is a novel polishing system and method for finishing flat and spherical glass optics in which a workpiece, independent of its initial shape (i.e., surface figure), will converge to the final surface figure with excellent surface quality under a fixed, unchanging set of polishing parameters in a single polishing iteration. In contrast, conventional full-aperture polishing methods require multiple, often long, iterative cycles involving polishing, metrology and process changes to achieve the desired surface figure. The Convergent Polishing process is based on the concept of workpiece-lap height mismatch, resulting in a pressure differential that decreases with removal, so that the workpiece converges to the shape of the lap. The successful implementation of the Convergent Polishing process is the result of combining a number of technologies to remove all sources of non-uniform spatial material removal (except for workpiece-lap mismatch) for surface figure convergence, and to reduce the number of rogue particles in the system for low scratch densities and low roughness. The Convergent Polishing process has been demonstrated for the fabrication of both flats and spheres of various shapes, sizes, and aspect ratios on various glass materials. The practical impact is that high quality optical components can be fabricated more rapidly, more repeatably, with less metrology, and with less labor, resulting in lower unit costs. In this study, the Convergent Polishing protocol is specifically described for fabricating 26.5 cm square fused silica flats from a fine-ground surface to a polished ~λ/2 surface figure after polishing 4 hr per surface on an 81 cm diameter polisher. PMID:25489745

  12. Three-dimensional reconstruction of the fast-start swimming kinematics of densely schooling fish

    PubMed Central

    Paley, Derek A.

    2012-01-01

    Information transmission via non-verbal cues such as a fright response can be quantified in a fish school by reconstructing individual fish motion in three dimensions. In this paper, we describe an automated tracking framework to reconstruct the full-body trajectories of densely schooling fish using two-dimensional silhouettes in multiple cameras. We model the shape of each fish as a series of elliptical cross sections along a flexible midline. We estimate the size of each ellipse using an iterated extended Kalman filter. The shape model is used in a model-based tracking framework in which simulated annealing is applied at each step to estimate the midline. Results are presented for eight fish with occlusions. The tracking system is currently being used to investigate fast-start behaviour of schooling fish in response to looming stimuli. PMID:21642367
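
    For reference, the iterated extended Kalman filter used for the ellipse size estimates relinearizes the measurement model around the current iterate rather than only around the prior. A generic sketch of the measurement update, in which the measurement function h and its Jacobian H_jac are problem-specific placeholders:

      import numpy as np

      def iekf_update(x, P, z, h, H_jac, R, iters=5):
          # Iterated EKF measurement update: relinearize the measurement
          # model h around the current iterate instead of only the prior.
          xi = x.copy()
          for _ in range(iters):
              H = H_jac(xi)
              S = H @ P @ H.T + R                       # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
              xi = x + K @ (z - h(xi) - H @ (x - xi))
          P_new = (np.eye(len(x)) - K @ H) @ P
          return xi, P_new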

  13. Evaluation of ITER MSE Viewing Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, S; Lerner, S; Morris, K

    2007-03-26

    The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam with the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view heating beam 5, and the core system is located in equatorial port 1, viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and the MSE was then assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the blur in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced, with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to that of the previous ITER design; it was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors form an intermediate image that is then relayed out of the port plug with more ideal (dielectric) mirrors. Engineering models of the optics, port plug, and neutral beam geometry were also created using the CATIA ITER models. Two video conference calls with the USIPO provided valuable design guidelines, such as the minimum distance of the first optic from the plasma. A second focus of the project was the calibration of the system. Several different techniques are proposed, both before and during plasma operation. Fixed and rotatable polarizers would be used to characterize the system in the no-plasma case. Obtaining the full modulation spectrum from the polarization analyzer allows measurement of polarization effects and also of MHD plasma phenomena. Light from neutral beam interaction with deuterium gas (no plasma) has been found useful to determine the wavelength of each spatial channel. The status of the optical design for the edge (upper) and core (lower) systems is summarized in a figure in the original report. Several issues should be addressed by a follow-on study, including whether the optical labyrinth has sufficient neutron shielding, and a detailed polarization characterization of actual mirrors.

  14. The effect of density fluctuations on electron cyclotron beam broadening and implications for ITER

    NASA Astrophysics Data System (ADS)

    Snicker, A.; Poli, E.; Maj, O.; Guidi, L.; Köhn, A.; Weber, H.; Conway, G.; Henderson, M.; Saibene, G.

    2018-01-01

    We present state-of-the-art computations of the propagation and absorption of electron cyclotron waves, retaining the effects of scattering due to electron density fluctuations. In ITER, injected microwaves are foreseen to suppress neoclassical tearing modes (NTMs) by driving current at the q=2 and q=3/2 resonant surfaces. Scattering of the beam can spoil the good localization of the absorption and thus impair NTM control capabilities. A novel tool, the WKBeam code, has been employed here to investigate this issue. The code is a Monte Carlo solver for the wave kinetic equation and retains diffraction, full axisymmetric tokamak geometry, determination of the absorption profile, and an integral form of the scattering operator which describes the effects of turbulent density fluctuations within the limits of the Born scattering approximation. The approach has been benchmarked against the paraxial WKB code TORBEAM and the full-wave code IPF-FDMC. In particular, the Born approximation is found to be valid for ITER parameters. In this paper, we show that the radiative transport of EC beams due to wave scattering in ITER is diffusive, unlike in present experiments, thus causing a broadening of the absorption profile by up to a factor of 2-4. However, the broadening depends strongly on the turbulence model assumed for the density fluctuations, which still has large uncertainties.

  15. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
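
    To make the MG idea concrete, here is a plain V-cycle for the 1-D Poisson problem -u'' = f with homogeneous Dirichlet conditions (weighted-Jacobi smoothing, injection restriction, linear prolongation). It is a far simpler stand-in for the paper's FMG/FAS scheme and pressure-correction setting, and assumes a grid of 2^k + 1 points:

      import numpy as np

      def smooth(u, f, h, iters=3, w=2.0 / 3.0):
          # Weighted-Jacobi smoothing for -u'' = f (Dirichlet BCs), in place.
          for _ in range(iters):
              u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
          return u

      def vcycle(u, f, h):
          # One multigrid V-cycle on a grid of 2**k + 1 points.
          u = smooth(u, f, h)
          if len(u) <= 3:
              return u
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          ec = vcycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h)  # coarse solve
          fine = np.arange(len(u))
          u += np.interp(fine, fine[::2], ec)     # linear prolongation
          return smooth(u, f, h)                  # post-smoothing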

  16. An Iterative Method for Problems with Multiscale Conductivity

    PubMed Central

    Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je

    2012-01-01

    A model with its conductivity varying strongly across a very thin layer is considered. It is related to a stable phantom model, invented to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and the outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high contrast of conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that of the reference solution for sufficiently small mesh size h. The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238

  17. Distance-weighted city growth.

    PubMed

    Rybski, Diego; García Cantú Ros, Anselmo; Kropp, Jürgen P

    2013-04-01

    Urban agglomerations exhibit complex emergent features, of which Zipf's law, i.e., a power-law size distribution, and fractality may be regarded as the most prominent. We propose a simplistic model for the generation of city-like structures which is solely based on the assumption that growth is more likely to take place close to inhabited space. The model involves one parameter, an exponent determining how strongly the attraction decays with distance. In addition, the model is run iteratively, so that existing clusters can grow (together) and new ones can emerge. The model is capable of reproducing the size distribution and the fractality of the boundary of the largest cluster. Although the power-law distribution depends on both the imposed exponent and the iteration, the fractality seems to be independent of the former and to depend only on the latter. Analyzing land-cover data, we estimate the parameter value γ ≈ 2.5 for Paris and its surroundings.
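
    A minimal sketch of the growth rule described above: starting from a seed cell, each new cell is drawn with probability proportional to the summed attraction d^(-gamma) exerted by all occupied cells, with the attraction field updated incrementally. Grid size and step count are illustrative:

      import numpy as np

      def grow_city(L=129, steps=400, gamma=2.5, seed=1):
          rng = np.random.default_rng(seed)
          ys, xs = np.mgrid[0:L, 0:L]
          occ = np.zeros((L, L), dtype=bool)
          attract = np.zeros((L, L))

          def add(y, x):
              occ[y, x] = True
              d = np.hypot(ys - y, xs - x)
              d[y, x] = np.inf                    # no self-attraction
              attract[:] += d ** (-gamma)         # incremental field update

          add(L // 2, L // 2)                     # seed cell in the centre
          for _ in range(steps):
              p = np.where(occ, 0.0, attract).ravel()
              k = rng.choice(L * L, p=p / p.sum())
              add(k // L, k % L)
          return occ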

  18. Radiation dose reduction in CT with adaptive statistical iterative reconstruction (ASIR) for patients with bronchial carcinoma and intrapulmonary metastases.

    PubMed

    Schäfer, M-L; Lüdemann, L; Böning, G; Kahn, J; Fuchs, S; Hamm, B; Streitparth, F

    2016-05-01

    To compare the radiation dose and image quality of 64-row chest computed tomography (CT) in patients with bronchial carcinoma or intrapulmonary metastases using full-dose CT reconstructed with filtered back projection (FBP) at baseline and reduced dose with 40% adaptive statistical iterative reconstruction (ASIR) at follow-up. The chest CT images of patients who underwent FBP and ASIR studies were reviewed. Dose-length products (DLP), effective dose, and size-specific dose estimates (SSDEs) were obtained. Image quality was analysed quantitatively by signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) measurement. In addition, image quality was assessed by two blinded radiologists evaluating images for noise, contrast, artefacts, visibility of small structures, and diagnostic acceptability using a five-point scale. The ASIR studies showed 36% reduction in effective dose compared with the FBP studies. The qualitative and quantitative image quality was good to excellent in both protocols, without significant differences. There were also no significant differences for SNR except for the SNR of lung surrounding the tumour (FBP: 35±17, ASIR: 39±22). A protocol with 40% ASIR can provide approximately 36% dose reduction in chest CT of patients with bronchial carcinoma or intrapulmonary metastases while maintaining excellent image quality. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
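
    The quantitative metrics used in such comparisons follow standard definitions. A sketch, assuming pixel-value arrays drawn from regions of interest (exact ROI placement is study-specific):

      import numpy as np

      def snr_cnr(roi_signal, roi_background):
          # roi_signal, roi_background: arrays of pixel values from two ROIs.
          snr = roi_signal.mean() / roi_signal.std()
          cnr = abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()
          return snr, cnr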

  19. Application of a GPU-Assisted Maxwell Code to Electromagnetic Wave Propagation in ITER

    NASA Astrophysics Data System (ADS)

    Kubota, S.; Peebles, W. A.; Woodbury, D.; Johnson, I.; Zolfaghari, A.

    2014-10-01

    The Low Field Side Reflectometer (LSFR) on ITER is envisioned to provide capabilities for electron density profile and fluctuation measurements in both the plasma core and edge. The current design for Equatorial Port Plug 11 (EPP11) employs seven monostatic antennas for use with both fixed-frequency and swept-frequency systems. The present work examines the characteristics of this layout using the 3-D version of the GPU-Assisted Maxwell Code (GAMC-3D). Previous studies in this area were performed with either 2-D full-wave codes or 3-D ray- and beam-tracing. GAMC-3D is based on the finite-difference time-domain (FDTD) method and can be run with either a fixed-frequency or modulated (e.g., FMCW) source, and with either a stationary or moving target (e.g., Doppler backscattering). The code is designed to run on a single NVIDIA Tesla GPU accelerator and utilizes a technique based on the moving window method to overcome the size limitation of the onboard memory. Effects such as beam drift, linear mode conversion, and diffraction/scattering will be examined. Comparisons will be made with beam-tracing calculations using the complex eikonal method. Supported by U.S. DoE Grants DE-FG02-99ER54527 and DE-AC02-09CH11466, and the DoE SULI Program at PPPL.
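
    For orientation, the core of any FDTD solver such as GAMC-3D is a leapfrog update of interleaved field arrays. A minimal 1-D vacuum sketch in normalized units follows; the real code is 3-D, GPU-resident, and uses a moving window, none of which is shown here:

      import numpy as np

      def fdtd_1d(n=400, steps=600, src=50):
          # Yee-style leapfrog update with a soft Gaussian-pulse source;
          # Courant number 0.5, perfectly reflecting boundaries.
          ez = np.zeros(n)
          hy = np.zeros(n - 1)
          c = 0.5
          for t in range(steps):
              hy += c * (ez[1:] - ez[:-1])
              ez[1:-1] += c * (hy[1:] - hy[:-1])
              ez[src] += np.exp(-((t - 30) / 10.0) ** 2)
          return ez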

  20. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for the PCE. As illustrated by numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
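
    For comparison, the analysis step of the plain stochastic EnKF that PCIKF aims to accelerate can be written in a few lines. This sketch assumes a linear observation operator H and does not show the paper's PCE surrogate or iteration:

      import numpy as np

      def enkf_update(X, d, H, sigma_obs, rng):
          # X: (n_state, n_ens) forecast ensemble; d: observation vector;
          # H: linear observation operator; sigma_obs: observation noise std.
          n_obs, n_ens = len(d), X.shape[1]
          Y = H @ X                                   # predicted observations
          D = d[:, None] + sigma_obs * rng.standard_normal((n_obs, n_ens))
          Xa = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
          Ya = Y - Y.mean(axis=1, keepdims=True)
          Cxy = Xa @ Ya.T / (n_ens - 1)               # state-obs covariance
          Cyy = Ya @ Ya.T / (n_ens - 1) + sigma_obs**2 * np.eye(n_obs)
          K = np.linalg.solve(Cyy, Cxy.T).T           # Kalman gain (Cyy symmetric)
          return X + K @ (D - Y)                      # analysis ensemble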

  1. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries, and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field-scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.
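
    The outer/inner structure described above (Picard linearization outside, preconditioned Krylov solves inside) can be sketched compactly with SciPy. The `assemble` callback standing in for the linearized Helmholtz/momentum system is hypothetical, and SciPy's serial ILU replaces the paper's overlapping subdomain ILUT preconditioner.

    ```python
    import numpy as np
    from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

    def picard_solve(assemble, u0, tol=1e-8, max_iter=50):
        """Picard iteration: freeze the nonlinearity at the current iterate,
        assemble A(u) x = b(u), and solve with ILU-preconditioned BiCGSTAB."""
        u = u0
        for _ in range(max_iter):
            A, b = assemble(u)                         # sparse system linearized about u
            ilu = spilu(A.tocsc())                     # incomplete LU preconditioner
            M = LinearOperator(A.shape, ilu.solve)
            u_new, info = bicgstab(A, b, x0=u, M=M)
            if info != 0:
                raise RuntimeError("inner Krylov solve did not converge")
            if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
                return u_new
            u = u_new
        return u
    ```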

  2. Iterative divergent/convergent doubling approach to linear conjugated oligomers. A rapid route to a 128 Å long potential molecular wire

    NASA Astrophysics Data System (ADS)

    Tour, James M.; Schumm, Jeffrey S.; Pearson, Darren L.

    1994-06-01

    Described is the synthesis of oligo(2-ethylphenylene ethynylene)s and oligo(2-(3'-ethylheptyl)phenylene ethynylene)s via an iterative divergent/convergent approach. Synthesized were the monomer, dimer, tetramer, and octamer of the ethyl derivative and the monomer, dimer, tetramer, octamer, and 16-mer of the ethylheptyl derivative. The 16-mer is 128 Å long. At each stage in the iteration, the length of the framework doubles. Only three sets of reaction conditions are needed for the entire iterative synthetic sequence: an iodination, a protodesilylation, and a Pd/Cu-catalyzed cross coupling. The oligomers were characterized spectroscopically and by mass spectrometry. The optical properties are presented, showing the stage at which the optical absorbance saturates. The size exclusion chromatography values for the number average weights, relative to polystyrene, illustrate the tremendous differences in the hydrodynamic volume of these rigid rod oligomers versus the random coils of polystyrene. These differences become quite apparent at the octamer stage. These oligomers may act as molecular wires in molecular electronic devices and they also serve as useful models for understanding related bulk polymers.

  3. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution was increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by a factor of 4 to 6 at most, depending on the particular dataset, reaching a best value of 8 nm, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested in the 3D locations of nanoparticles only. PMID:22152090
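
    The paper's pipeline estimates the two PSFs blindly; as a simpler illustration of the iterative deconvolution step itself, here is a plain Richardson-Lucy loop for a known 3-D PSF. This is a sketch of the generic algorithm under that simplifying assumption, not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=25):
        """Richardson-Lucy deconvolution of a 3-D focal series; more iterations
        sharpen the axial response but eventually amplify noise, so n_iter is a
        trade-off of the kind discussed above."""
        estimate = np.full(image.shape, image.mean())
        psf_mirror = psf[::-1, ::-1, ::-1]              # mirrored PSF for the adjoint step
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)  # guard against division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```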

  4. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    NASA Astrophysics Data System (ADS)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
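
    The flavor of such a preconditioned block eigensolver can be conveyed with SciPy's LOBPCG. The sparse matrix below is a random symmetric stand-in for a nuclear CI Hamiltonian, and the simple Jacobi preconditioner and random starting block replace the paper's physics-informed choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg, LinearOperator

    n, k = 10_000, 5                                    # matrix size, eigenpairs wanted
    H = sp.diags(np.arange(1.0, n + 1)) + sp.random(n, n, density=1e-4, random_state=0)
    H = (H + H.T) / 2                                   # symmetrize the stand-in Hamiltonian

    d = H.diagonal()
    M = LinearOperator((n, n), matvec=lambda x: x / d)  # Jacobi (diagonal) preconditioner

    X = np.random.default_rng(0).standard_normal((n, k))    # block of starting guesses
    eigvals, eigvecs = lobpcg(H, X, M=M, largest=False, tol=1e-6, maxiter=200)
    ```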

  5. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
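
    The "straight PCA" baseline that the iterative scheme is compared against can be sketched in a few lines: fit a normative subspace and score a test sample by its reconstruction residual. The feature matrices and component count are hypothetical.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def abnormality_score(normative_feats, test_feat, n_components):
        """Deviation-from-normality score: distance of the test sample from the
        PCA subspace fitted to features of healthy individuals."""
        pca = PCA(n_components=n_components).fit(normative_feats)
        recon = pca.inverse_transform(pca.transform(test_feat[None]))[0]
        return np.linalg.norm(test_feat - recon)
    ```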

  6. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  7. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    PubMed

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
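
    For reference, the OSEM update evaluated here has the standard form below (one sub-iteration over subset S_s, with system matrix a_{ij} and measured projections y_i); this is the textbook formula, quoted for context rather than taken from the paper.

    ```latex
    x_j^{(s+1)} \;=\; \frac{x_j^{(s)}}{\sum_{i \in S_s} a_{ij}}
                 \sum_{i \in S_s} a_{ij}\,
                 \frac{y_i}{\sum_{l} a_{il}\, x_l^{(s)}}
    ```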

  8. Road detection in SAR images using a tensor voting algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian

    2007-11-01

    In this paper, the problem of the detection of road networks in Synthetic Aperture Radar (SAR) images is addressed. Most of the previous methods extract the road by detecting lines and reconstructing the network. Traditional algorithms used in the reconstruction step, such as MRFs, GA, and Level Set, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, related to the scale. The algorithm we present is verified to be effective when applied to road extraction from real Radarsat images.

  9. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections by preserving structural information. Methods: We first reconstruct a CT image on the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the CT reconstruction mean errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of Iodine and Teflon, our method on 20 projections obtains similar quality of decomposed material images compared with FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density with similar quality compared with two full scans. Our future work includes more phantom studies to validate the performance of our method.
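
    A plausible form of the edge-weighted compressed-sensing objective described in the Methods is sketched below. The exact functional is not given in the abstract, so this is an assumption about its structure: small weights w_j where the full-dose scan shows an edge, so that the sparsity penalty is relaxed there and object boundaries are preserved.

    ```latex
    \min_{x} \; \underbrace{\lVert A x - y \rVert_2^2}_{\text{few-view data fidelity}}
    \;+\; \lambda \sum_j w_j \,\lvert (\nabla x)_j \rvert ,
    \qquad w_j \ \text{small on edges extracted from the full scan}
    ```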

  10. Surface heat loads on the ITER divertor vertical targets

    NASA Astrophysics Data System (ADS)

    Gunn, J. P.; Carpentier-Chouchana, S.; Escourbiac, F.; Hirai, T.; Panayotis, S.; Pitts, R. A.; Corre, Y.; Dejarnac, R.; Firdaouss, M.; Kočan, M.; Komm, M.; Kukushkin, A.; Languille, P.; Missirlian, M.; Zhao, W.; Zhong, G.

    2017-04-01

    The heating of tungsten monoblocks at the ITER divertor vertical targets is calculated using the heat flux predicted by three-dimensional ion orbit modelling. The monoblocks are beveled to a depth of 0.5 mm in the toroidal direction to provide magnetic shadowing of the poloidal leading edges within the range of specified assembly tolerances, but this increases the magnetic field incidence angle resulting in a reduction of toroidal wetted fraction and concentration of the local heat flux to the unshadowed surfaces. This shaping solution successfully protects the leading edges from inter-ELM heat loads, but at the expense of (1) temperatures on the main loaded surface that could exceed the tungsten recrystallization temperature in the nominal partially detached regime, and (2) melting and loss of margin against critical heat flux during transient loss of detachment control. During ELMs, the risk of monoblock edge melting is found to be greater than the risk of full surface melting on the plasma-wetted zone. Full surface and edge melting will be triggered by uncontrolled ELMs in the burning plasma phase of ITER operation if current models of the likely ELM ion impact energies at the divertor targets are correct. During uncontrolled ELMs in pre-nuclear deuterium or helium plasmas at half the nominal plasma current and magnetic field, full surface melting should be avoided, but edge melting is predicted.

  11. Optimization studies of the ITER low field side reflectometer.

    PubMed

    Diem, S J; Wilgen, J B; Bigelow, T S; Hanson, G R; Harvey, R W; Smirnov, A P

    2010-10-01

    Microwave reflectometry will be used on ITER to measure the electron density profile, density fluctuations due to MHD/turbulence, and edge localized mode (ELM) density transients, and to serve as an L-H transition monitor. The ITER low field side reflectometer system will measure both core and edge quantities using multiple antenna arrays spanning frequency ranges of 15-155 GHz for the O-mode system and 55-220 GHz for the X-mode system. Optimization studies using the GENRAY ray-tracing code have been done for edge and core measurements. The reflectometer launchers will utilize the HE11 mode launched from circular corrugated waveguide. The launched beams are assumed to be Gaussian with a beam waist diameter of 0.643 times the waveguide diameter. Optimum launcher size and placement are investigated by computing the antenna coupling between launchers, assuming the launched and received beams have a Gaussian beam pattern.
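
    A worked example of the stated coupling rule may help fix magnitudes; the waveguide diameter and probing frequency below are illustrative assumptions, not the ITER design values.

    ```python
    import math

    D = 63.5e-3                          # hypothetical corrugated waveguide diameter (m)
    f = 100e9                            # hypothetical probing frequency (Hz)
    lam = 3e8 / f                        # free-space wavelength (m)
    w0 = 0.643 * D / 2                   # waist radius from the 0.643 * D diameter rule
    theta = lam / (math.pi * w0)         # far-field Gaussian divergence half-angle (rad)
    print(f"waist radius {w0 * 1e3:.1f} mm, divergence {math.degrees(theta):.2f} deg")
    ```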

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J. X., E-mail: jsliu9@berkeley.edu; Milbourne, T.; Bitter, M.

    The implementation of advanced electron cyclotron emission imaging (ECEI) systems on tokamak experiments has revolutionized the diagnosis of magnetohydrodynamic (MHD) activities and improved our understanding of instabilities, which lead to disruptions. It is therefore desirable to have an ECEI system on the ITER tokamak. However, the large size of the optical components in presently used ECEI systems has, up to now, precluded the implementation of an ECEI system on ITER. This paper describes a new optical ECEI concept that employs a single spherical mirror as the only optical component and exploits the astigmatism of such a mirror to produce an image with one-dimensional spatial resolution on the detector. Since this alternative approach would only require a thin slit as the viewing port to the plasma, it would make the implementation of an ECEI system on ITER feasible. The results obtained from proof-of-principle experiments with a 125 GHz microwave system are presented.

  13. Estimation of the dust production rate from the tungsten armour after repetitive ELM-like heat loads

    NASA Astrophysics Data System (ADS)

    Pestchanyi, S.; Garkusha, I.; Makhlaj, V.; Landman, I.

    2011-12-01

    Experimental simulations for the erosion rate of tungsten targets under ITER edge-localized mode (ELM)-like surface heat loads of 0.75 MJ m-2 causing surface melting and of 0.45 MJ m-2 without melting have been performed in the QSPA-Kh50 plasma accelerator. Analytical considerations allow us to conclude that for both energy deposition values the erosion mechanism is solid dust ejection during surface cracking under the action of thermo-stress. A tungsten influx into the ITER containment of N_W ~ 5×10^18 W per medium-size ELM of 0.75 MJ m-2 and 0.25 ms time duration has been estimated. The radiation cooling power of P_rad = 150-300 MW due to such an influx of tungsten is intolerable: it would cool the ITER core to 1 keV within a few seconds.

  14. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows solving large-size problems approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  15. The effects of iterative reconstruction in CT on low-contrast liver lesion volumetry: a phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas

    2017-03-01

    Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.

  16. MSFC Advanced Concepts Office and the Iterative Launch Vehicle Concept Method

    NASA Technical Reports Server (NTRS)

    Creech, Dennis

    2011-01-01

    This slide presentation reviews the work of the Advanced Concepts Office (ACO) at Marshall Space Flight Center (MSFC), with particular emphasis on the method used to model launch vehicles using INTegrated ROcket Sizing (INTROS), a modeling system that assists in establishing the launch vehicle concept design and stage sizing, and facilitates the integration of exterior analytic efforts, vehicle architecture studies, and technology and system trades and parameter sensitivities.

  17. Simulant Development for LAWPS Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Renee L.; Schonewill, Philip P.; Burns, Carolyn A.

    2017-05-23

    This report describes simulant development work that was conducted to support the technology maturation of the LAWPS facility. Desired simulant physical properties (density, viscosity, solids concentration, solid particle size), sodium concentrations, and general anion identifications were provided by WRPS. The simulant recipes, particularly a “nominal” 5.6M Na simulant, are intended to be tested at several scales, ranging from bench-scale (500 mL) to full-scale. Each simulant formulation was selected to be chemically representative of the waste streams anticipated to be fed to the LAWPS system, and used the current version of the LAWPS waste specification as a formulation basis. After simulant development iterations, four simulants of varying sodium concentration (5.6M, 6.0M, 4.0M, and 8.0M) were prepared and characterized. The formulation basis, development testing, and final simulant recipes and characterization data for these four simulants are presented in this report.

  18. Fast iterative solution of the Bethe-Salpeter eigenvalue problem using low-rank and QTT tensor approximation

    NASA Astrophysics Data System (ADS)

    Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.

    2017-04-01

    In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of the other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(N_b^2) in the size of the atomic orbitals basis set, N_b, instead of the practically intractable O(N_b^6) scaling for direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with an ALS-type iteration in block QTT format. The QTT rank of the matrix entities is of almost the same magnitude as the number of occupied orbitals in the molecular system, N_o.

  19. Comparison of computational to human observer detection for evaluation of CT low dose iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.

    2014-03-01

    Model observers were created and compared to human observers for the detection of low contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) than for filtered back projection (FBP). Detection in IMR was better than FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across independent variables (dose, size, contrast, and reconstruction) at fixed display window. However, Model(k1) performed better when considering model complexity using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
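
    A compact sketch of the simplest variant above, Model(k1), a channelized Hotelling observer with internal noise added to the decision variable, is given below; the channel matrix and image arrays are assumed inputs, and the implementation details are illustrative rather than the authors'.

    ```python
    import numpy as np

    def cho_snr(sig_imgs, noise_imgs, channels, k1=0.0, rng=None):
        """CHO detectability with DV internal noise: sig_imgs/noise_imgs are
        (n_img, n_pix) arrays, channels is (n_pix, n_ch), e.g. 5 Laguerre-Gauss
        channels; k1 scales internal noise proportional to the DV std."""
        rng = rng or np.random.default_rng(0)
        vs, vn = sig_imgs @ channels, noise_imgs @ channels   # channel outputs
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled channel covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))       # Hotelling template
        ts, tn = vs @ w, vn @ w                               # decision variables (DV)
        dv_std = np.sqrt(0.5 * (ts.var() + tn.var()))
        ts = ts + rng.normal(0.0, k1 * dv_std, ts.shape)      # internal noise on the DV
        tn = tn + rng.normal(0.0, k1 * dv_std, tn.shape)
        return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
    ```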

  20. Method of fabricating a whispering gallery mode resonator

    NASA Technical Reports Server (NTRS)

    Savchenkov, Anatoliy A. (Inventor); Matkso, Andrey B. (Inventor); Iltchenko, Vladimir S. (Inventor); Maleki, Lute (Inventor)

    2011-01-01

    A method of fabricating a whispering gallery mode resonator (WGMR) is provided. The WGMR can be fabricated from a particular material, annealed, and then polished. The WGMR can be repeatedly annealed and then polished. The repeated polishing of the WGMR can be carried out using an abrasive slurry. The abrasive slurry can have a predetermined, constant grain size. Each subsequent polishing of the WGMR can use an abrasive slurry having a grain size that is smaller than the grain size of the abrasive slurry of the previous polishing iteration.

  1. Detailed studies of full-size ATLAS12 sensors

    NASA Astrophysics Data System (ADS)

    Hommels, L. B. A.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Arratia, M.; Klein, C. T.; Ullan, M.; Fleta, C.; Fernandez-Tejero, J.; Bloch, I.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Trofimov, A.; Yildirim, E.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O`Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Stastny, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Sato, K.; Sato, K.; Hagihara, M.; Iwabuchi, S.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.

    2016-09-01

    The "ATLAS ITk Strip Sensor Collaboration" R&D group has developed a second iteration of single-sided n+-in-p type micro-strip sensors for use in the tracker upgrade of the ATLAS experiment at the High-Luminosity (HL) LHC. The full size sensors measure approximately 97 × 97mm2 and are designed for tolerance against the 1.1 ×1015neq /cm2 fluence expected at the HL-LHC. Each sensor has 4 columns of 1280 individual 23.9 mm long channels, arranged at 74.5 μm pitch. Four batches comprising 120 sensors produced by Hamamatsu Photonics were evaluated for their mechanical, and electrical bulk and strip characteristics. Optical microscopy measurements were performed to obtain the sensor surface profile. Leakage current and bulk capacitance properties were measured for each individual sensor. For sample strips across the sensor batches, the inter-strip capacitance and resistance as well as properties of the punch-through protection structure were measured. A multi-channel probecard was used to measure leakage current, coupling capacitance and bias resistance for each individual channel of 100 sensors in three batches. The compiled results for 120 unirradiated sensors are presented in this paper, including summary results for almost 500,000 strips probed. Results on the reverse bias voltage dependence of various parameters and frequency dependence of tested capacitances are included for validation of the experimental methods used. Comparing results with specified values, almost all sensors fall well within specification.

  2. Three-Dimensional BEM and FEM Submodelling in a Cracked FML Full Scale Aeronautic Panel

    NASA Astrophysics Data System (ADS)

    Citarella, R.; Cricrì, G.

    2014-06-01

    This paper concerns the numerical characterization of the fatigue strength of a flat stiffened panel, designed as a fiber metal laminate (FML) and made of Aluminum alloy and Fiber Glass FRP. The panel is full scale and was tested, in a previous work, under fatigue biaxial loads applied by means of a multi-axial fatigue machine: an initial through-the-thickness notch was created in the panel and the aforementioned biaxial fatigue load applied, causing crack initiation and propagation in the Aluminum layers. Moreover, still in a previous work, the fatigue test was simulated by the Dual Boundary Element Method (DBEM) in a bidimensional approach. Now, in order to validate the assumptions made in the aforementioned DBEM approach concerning the delamination area size and the fiber integrity during crack propagation, three-dimensional BEM and FEM submodelling analyses are realized. Due to the lack of experimental data on the delamination area size (normally increasing as the crack propagates), such area is calculated by iterative three-dimensional BEM or FEM analyses, considering the inter-laminar stresses and a delamination criterion. Such three-dimensional analyses, and in particular the proposed FEM model, can also provide insights into the fiber rupture problem. These DBEM-BEM or DBEM-FEM approaches aim to provide a general-purpose evaluation tool for a better understanding of the fatigue resistance of FML panels, providing a deeper insight into the role of fiber stiffness and of delamination extension on the stress intensity factors.

  3. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on the Otsu threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in the same manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
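
    The loop structure described above translates almost line-for-line into code. The sketch below follows the paper's description using scikit-image's Otsu threshold; the stopping tolerance and safeguards are illustrative choices.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def triclass_segment(image, eps=1e-3, max_iter=50):
        """Iterative triclass thresholding: each pass splits the current TBD
        region into definite foreground (above the upper class mean), definite
        background (below the lower class mean), and a smaller TBD band."""
        foreground = np.zeros(image.shape, dtype=bool)
        tbd = np.ones(image.shape, dtype=bool)          # current TBD region
        t_prev = None
        for _ in range(max_iter):
            vals = image[tbd]
            t = threshold_otsu(vals)
            mu_low = vals[vals <= t].mean()             # background class mean
            mu_high = vals[vals > t].mean()             # foreground class mean
            foreground |= tbd & (image >= mu_high)      # settle foreground pixels
            tbd &= (image > mu_low) & (image < mu_high) # shrink the TBD band
            if not tbd.any() or (t_prev is not None and abs(t - t_prev) < eps):
                break
            t_prev = t
        return foreground
    ```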

  4. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that, aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion, but less shrinkage, of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods, except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.

  5. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method is effective in considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI essentially undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time, compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).

  6. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For the typical reconstruction protocol used in clinical practice, i.e., less than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large numbers of iterations, TOF+PSF yields the best observer performance.

  7. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
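
    SciPy ships an inexact Newton-Krylov driver in the same spirit (though it uses a Jacobian-free finite-difference Jacobian action rather than the paper's exact analytical Jacobian, and GMRES/LGMRES rather than OSOmin). The nonlinear residual below is a hypothetical 1-D stand-in, not the transonic small disturbance equation.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        """Hypothetical discrete nonlinear BVP u'' = u**3 with u(0)=0, u(1)=1."""
        h2 = (1.0 / 100) ** 2                          # grid spacing squared
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1] - 1.0                # Dirichlet boundary rows
        r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - h2 * u[1:-1] ** 3
        return r

    u0 = np.linspace(0.0, 1.0, 101)                    # initial guess
    u = newton_krylov(residual, u0, method="gmres", f_tol=1e-10)
    ```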

  8. AORSA full wave calculations of helicon waves in DIII-D and ITER

    NASA Astrophysics Data System (ADS)

    Lau, C.; Jaeger, E. F.; Bertelli, N.; Berry, L. A.; Green, D. L.; Murakami, M.; Park, J. M.; Pinsker, R. I.; Prater, R.

    2018-06-01

    Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all orders of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10%–20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.

  9. AORSA full wave calculations of helicon waves in DIII-D and ITER

    DOE PAGES

    Lau, Cornwall; Jaeger, E.F.; Bertelli, Nicola; ...

    2018-04-11

    Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all orders of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10-20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.

  10. AORSA full wave calculations of helicon waves in DIII-D and ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lau, Cornwall; Jaeger, E.F.; Bertelli, Nicola

    Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all orders of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10-20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.

  11. Interactive computer graphics system for structural sizing and analysis of aircraft structures

    NASA Technical Reports Server (NTRS)

    Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.

    1975-01-01

    A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.

  12. VRF ("Visual RobFit") — nuclear spectral analysis with non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Lasche, George; Coldwell, Robert; Metzger, Robert

    2017-09-01

    A new application (known as "VRF", or "Visual RobFit") for the analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide; also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing, until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows the identification of minor peaks, masked by larger overlapping peaks, that would otherwise not be possible. The application and method are briefly described and two examples are presented.

  13. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE PAGES

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao; ...

    2017-09-14

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.

  14. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
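
    A minimal sketch of a GIST-style iteration follows: a proximal step whose step size is initialized by the BB rule and accepted via a monotone backtracking line search. The l1 soft-threshold is used as the prox for brevity; GIST's actual scope is non-convex penalties (capped-l1, log-sum, MCP, SCAD), which likewise admit closed-form proximal operators. This is an interpretation of the published description, not the authors' reference code.

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        """Closed-form proximal operator of tau * ||.||_1 (stand-in penalty)."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def gist(x0, grad_f, obj, prox=soft_threshold, n_iter=100, eta=2.0, sigma=1e-4):
        """obj(x) evaluates the full objective f(x) + penalty(x); grad_f the
        gradient of the smooth part; prox(v, 1/t) the penalty's proximal map."""
        x, g, t = x0.copy(), grad_f(x0), 1.0
        for _ in range(n_iter):
            while True:                                 # monotone line search
                x_new = prox(x - g / t, 1.0 / t)
                if obj(x_new) <= obj(x) - 0.5 * sigma * t * np.sum((x_new - x) ** 2):
                    break
                t *= eta                                # shrink the step (grow t)
            g_new = grad_f(x_new)
            s, y = x_new - x, g_new - g
            t = np.dot(s, y) / max(np.dot(s, s), 1e-16) # BB initialization of 1/step
            t = min(max(t, 1e-8), 1e8)                  # keep the step size sane
            x, g = x_new, g_new
        return x
    ```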

  15. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.

  16. Evaluation of power transfer efficiency for a high power inductively coupled radio-frequency hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.

    2018-04-01

    Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Due to its various advantages, a radio frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices whose operational working frequency has been chosen to be 1 MHz; they are characterized by high RF power density (˜9.4 W cm-3) and low operational pressure (around 0.3 Pa). The RF field is produced by a coil in a cylindrical chamber, leading to plasma generation followed by expansion inside the chamber. This paper recalls the concepts on which a methodology is developed to evaluate the efficiency of the RF power transfer to the hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and in dependence on other operating source and plasma parameters. The study is applied to a high power IC RF hydrogen ion source which is similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).

  17. Injected mass deposition thresholds for lithium granule instigated triggering of edge localized modes on EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunsford, R.; Sun, Zhen; Maingi, Rajesh

    The ability of an injected lithium granule to promptly trigger an edge localized mode (ELM) has been established in multiple experiments. By horizontally injecting granules ranging from 200 microns to 1 mm in diameter into the low field side of EAST H-mode discharges, we have determined that granules with diameter > 600 microns are successful in triggering ELMs more than 95% of the time. Granules were radially injected from the outer midplane with velocities ~ 80 m/s into EAST upper-single-null discharges with an ITER-like tungsten monoblock divertor. ELM triggering was a prompt response to granule injection, and for granules of a sufficient size there was no evidence of a "trigger lag" phenomenon as observed in full metal machines. We also demonstrated that the triggering efficiency decreased with granule size during dynamic size scans. These granules were individually tracked throughout their injection cycle in order to determine their efficacy at triggering an ELM. Furthermore, by simulating the granule injection with an experimentally benchmarked neutral gas shielding (NGS) model, the ablatant mass deposition required to promptly trigger an ELM is calculated and the fractional mass deposition is determined. Simulated 900 micron granules capable of triggering an ELM show a peaked mass deposition of 3.9×10^17 atoms per mm of penetration at a depth of approximately 5 cm past the separatrix.

  18. Injected mass deposition thresholds for lithium granule instigated triggering of edge localized modes on EAST

    DOE PAGES

    Lunsford, R.; Sun, Zhen; Maingi, Rajesh; ...

    2017-12-19

    The ability of an injected lithium granule to promptly trigger an edge localized mode (ELM) has been established in multiple experiments. By horizontally injecting granules ranging from 200 microns to 1 mm in diameter into the low field side of EAST H-mode discharges, we have determined that granules with diameter > 600 microns are successful in triggering ELMs more than 95% of the time. Granules were radially injected from the outer midplane with velocities ~ 80 m/s into EAST upper-single-null discharges with an ITER-like tungsten monoblock divertor. ELM triggering was a prompt response to granule injection, and for granules of a sufficient size there was no evidence of a "trigger lag" phenomenon as observed in full metal machines. We also demonstrated that the triggering efficiency decreased with granule size during dynamic size scans. These granules were individually tracked throughout their injection cycle in order to determine their efficacy at triggering an ELM. Furthermore, by simulating the granule injection with an experimentally benchmarked neutral gas shielding (NGS) model, the ablatant mass deposition required to promptly trigger an ELM is calculated and the fractional mass deposition is determined. Simulated 900 micron granules capable of triggering an ELM show a peaked mass deposition of 3.9×10^17 atoms per mm of penetration at a depth of approximately 5 cm past the separatrix.

  19. Bridging single and multireference coupled cluster theories with universal state selective formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Kowalski, Karol

    2013-05-28

    The universal state selective (USS) multireference approach is used to construct new energy functionals which offer a unique possibility of bridging single and multireference coupled cluster theories (SR/MRCC). These functionals, which can be used to develop iterative and non-iterative approaches, utilize a special form of the trial wavefunctions, which assures additive separability (or size-consistency) of the USS energies in the non-interacting subsystem limit. When the USS formalism is combined with approximate SRCC theories, the resulting formalism can be viewed as a size-consistent version of the method of moments of coupled cluster equations (MMCC) employing a MRCC trial wavefunction. Special cases of the USS formulations, which utilize single-reference state-specific CC (V.V. Ivanov, D.I. Lyakh, L. Adamowicz, Phys. Chem. Chem. Phys. 11, 2355 (2009)) and tailored CC (T. Kinoshita, O. Hino, R.J. Bartlett, J. Chem. Phys. 123, 074106 (2005)) expansions, are also discussed.

  20. Analyses of microstructure, composition and retention of hydrogen isotopes in divertor tiles of JET with the ITER-like wall

    NASA Astrophysics Data System (ADS)

    Masuzaki, S.; Tokitani, M.; Otsuka, T.; Oya, Y.; Hatano, Y.; Miyamoto, M.; Sakamoto, R.; Ashikawa, N.; Sakurada, S.; Uemura, Y.; Azuma, K.; Yumizuru, K.; Oyaizu, M.; Suzuki, T.; Kurotaki, H.; Hamaguchi, D.; Isobe, K.; Asakura, N.; Widdowson, A.; Heinola, K.; Jachmich, S.; Rubel, M.; contributors, JET

    2017-12-01

    Results of the comprehensive surface analyses of divertor tiles and dust retrieved from JET after the first ITER-like wall campaign (2011-2012) are presented. Samples cored from the divertor tiles were analyzed. Numerous nano-size bubble-like structures were observed in the deposition layer on the apron of the inner divertor tile, and a beryllium dust particle with the same structures was found in the matter collected from the inner divertor after the campaign. This suggests that the nano-size bubble-like structures can make the deposition layer brittle and may lead to cracking followed by dust generation. X-ray photoelectron spectroscopy analyses of the chemical states of species in the deposition layers identified the formation of beryllium-tungsten intermetallic compounds on an inner vertical tile. Different tritium retention profiles along the divertor tiles were observed at the top surfaces and at deeper regions of the tiles using the imaging plate technique.

  1. THERMAL DESIGN OF THE ITER VACUUM VESSEL COOLING SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, Juan J; Yoder Jr, Graydon L; Kim, Seokho H

    RELAP5-3D models of the ITER Vacuum Vessel (VV) Primary Heat Transfer System (PHTS) have been developed. The design of the cooling system is described in detail, and RELAP5 results are presented. Two parallel pump/heat exchanger trains comprise the design: one train is for full-power operation and the other is for emergency operation or operation at decay heat levels. All the components are located inside the Tokamak building (a significant change from the original configurations). The results presented include operation at full power, decay heat operation, and baking operation. The RELAP5-3D results confirm that the design can operate satisfactorily during both normal pulsed power operation and decay heat operation. All the temperatures in the coolant and in the different system components are maintained within acceptable operating limits.

  2. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  3. A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.

    PubMed

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structure. However, their computer-intensive nature, typically requiring a large number of iterations and a complete scan of the full dataset at each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis: the full-data log-likelihood is replaced by a Monte Carlo average of log-likelihoods calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset across iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining it with reversible jump MCMC and simulated annealing, respectively.
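
    A minimal sketch of the idea follows, with a Gaussian mean as the toy target; the subsample size, the n/m log-likelihood rescaling, and the implicit flat prior are simple illustrative choices, not necessarily the paper's exact correction.

      # Bootstrap Metropolis-Hastings sketch (toy Gaussian-mean example).
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(loc=2.0, scale=1.0, size=100_000)
      n = data.size

      k, m = 20, 2_000                 # bootstrap samples, subsample size
      boots = [rng.choice(data, size=m, replace=True) for _ in range(k)]

      def approx_loglik(theta):
          # Average of per-bootstrap log-likelihoods, each rescaled by n/m so
          # the average mimics the full-data log-likelihood; in the real
          # algorithm these k terms are computed in parallel.
          return np.mean([(n / m) * np.sum(-0.5 * (b - theta) ** 2)
                          for b in boots])

      theta, ll, chain = 0.0, approx_loglik(0.0), []
      for _ in range(5_000):
          prop = theta + 0.01 * rng.normal()        # random-walk proposal
          ll_prop = approx_loglik(prop)
          if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis accept step
              theta, ll = prop, ll_prop
          chain.append(theta)
      print(np.mean(chain[2_000:]))    # posterior mean estimate, near 2.0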

  5. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, and in particular its filter-mask design method, that optimizes the filter to the imaging object through an adaptive, iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively not only in the gap-filling step but also in the mask generation, identifying the object-specific low-frequency region in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, so that the region converges toward the character of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and its results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanned object and yields results comparable to those of the manually optimized DCT2 algorithm without perfect or full prior information of the imaging object.
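
    The loop below sketches the spirit of the method: transform to the DCT domain, keep a low-frequency region, transform back, and re-impose the measured data outside the gaps. The mask shape, gap location, and iteration count are illustrative assumptions (the paper re-derives the preserved region adaptively at every iteration).

      # Iterative DCT-domain sinogram gap filling (sketch).
      import numpy as np
      from scipy.fft import dctn, idctn

      sino = np.random.rand(180, 128)        # stand-in sinogram
      gaps = np.zeros_like(sino, dtype=bool)
      gaps[:, 40:44] = True                  # simulated detector gap
      sino[gaps] = 0.0

      est = sino.copy()
      for it in range(50):
          coef = dctn(est, norm='ortho')
          mask = np.zeros_like(coef)
          mask[:16, :16] = 1.0               # low-frequency region to preserve
          est = idctn(coef * mask, norm='ortho')
          est[~gaps] = sino[~gaps]           # re-impose measured data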

  6. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
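
    The "replicated reconstruction object" pattern described above can be sketched in a few lines: each worker accumulates its share of the projections into a private copy of the image, and the copies are summed in a final reduction, so no fine-grained locking is needed. The backprojection below is a trivial stand-in and all sizes are illustrative, not Trace's actual implementation.

      # Sketch of per-worker replicated reconstruction objects.
      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      n_workers, n_proj, N = 4, 360, 128
      sinogram = np.random.rand(n_proj, N)

      def reconstruct_share(proj_ids):
          local = np.zeros((N, N))            # private replica: no locking
          for p in proj_ids:
              # Placeholder "backprojection": smear the projection row across
              # the image; a real implementation weights along ray paths.
              local += np.tile(sinogram[p], (N, 1))
          return local

      shares = np.array_split(np.arange(n_proj), n_workers)
      with ThreadPoolExecutor(n_workers) as ex:
          replicas = list(ex.map(reconstruct_share, shares))
      image = np.sum(replicas, axis=0)        # final reduction over replicas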

  7. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
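
    As a rough illustration of the block-diagonal-scaling idea, the sketch below inverts each small diagonal block once, prior to iteration, and uses the result as a preconditioner for a nonsymmetric Krylov solve. The matrix, the block size, and the use of SciPy's GMRES in place of the truncated Orthomin solver of the paper are all assumptions made for the example.

      # Block-Jacobi (block-diagonal-scaling) preconditioning sketch.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n, nb = 90, 3                       # unknowns, variables per cell block
      A = (sp.diags([-1.0, 4.0, -1.0], [-nb, 0, nb], shape=(n, n), format='csr')
           + sp.random(n, n, density=0.02, random_state=0))
      b = np.ones(n)

      # Invert each nb x nb diagonal block once, before the iteration starts.
      blocks = [np.linalg.inv(A[i:i + nb, i:i + nb].toarray())
                for i in range(0, n, nb)]
      M = spla.LinearOperator((n, n), matvec=lambda r: np.concatenate(
              [blocks[i // nb] @ r[i:i + nb] for i in range(0, n, nb)]))

      x, info = spla.gmres(A, b, M=M)
      print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged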

  9. Constructing integrable high-pressure full-current free-boundary stellarator magnetohydrodynamic equilibrium solutions

    NASA Astrophysics Data System (ADS)

    Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.

    2003-10-01

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasmas 8 2083).

  10. Effects of ray profile modeling on resolution recovery in clinical CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, Christian; Knaup, Michael; Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz-heidelberg.de

    2014-02-15

    Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. However, among vendors and researchers, there is no consensus on how best to achieve these goals. The authors focus on the aspect of geometric ray profile modeling, which is realized by some algorithms, while others model the ray as a straight line. The authors incorporate ray modeling (RM) in nonregularized iterative reconstruction. That means, instead of using one simple single needle beam to represent the x-ray, the authors evaluate the double integral of attenuation path length over the finite source distribution and the finite detector element size in the numerical forward projection. Their investigations aim at analyzing the resolution recovery (RR) effects of RM. Resolution recovery means that frequencies can be recovered beyond the resolution limit of the imaging system. In order to evaluate whether clinical CT images can benefit from modeling the geometrical properties of each x-ray, the authors performed a 2D simulation study of a clinical CT fan-beam geometry that includes the precise modeling of these geometrical properties. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and a Forbild thorax phantom with circular resolution patterns representing calcifications in the heart region are simulated. An FBP reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The FBP is compared to iterative reconstruction techniques with and without RM: an ordered subsets convex (OSC) algorithm without any RM (OSC), an OSC in which the forward projection models the finite focal spot and detector size (OSC-RM), and an OSC with RM and a matched forward and backprojection pair (OSC-T-RM, T for transpose). In all cases, noise was matched in order to focus the comparison on spatial resolution. The authors use two different simulation settings, both based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm; setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and they compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: the statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART), and another statistical iterative reconstruction algorithm, denoted ordered subsets maximum likelihood (OSML). All algorithms were implemented both without RM (denoted OSC, OSSART, and OSML) and with RM (denoted OSC-RM, OSSART-RM, and OSML-RM). Results: For the unrealistic case of a 5.0 mm focal spot, the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, the first that cannot be resolved without RM, is easily resolved with RM. For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependence on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size, increasing improvements are observed with RM. Conclusions: Geometric RM in iterative reconstruction helps to improve spatial resolution if the ray cross-section is significantly larger than the ray sampling distance. In clinical CT, however, the ray is not much thicker than the distance between neighboring ray centers, as the focal spot size is small and detector crosstalk is negligible due to reflective coatings between detector elements. Therefore, RM appears not to be necessary in clinical CT to achieve resolution recovery.

  11. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov methods (NKMs), to solve nonlinear systems of equations. The success of NKMs depends heavily on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal Jacobian as the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
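
    For readers who want to experiment with this class of methods, SciPy ships a matrix-free Newton-Krylov solver; the toy residual below (a 1D nonlinear reaction-diffusion stand-in, not the authors' momentum equations) shows the basic usage.

      # Minimal Newton-Krylov usage sketch (illustrative problem).
      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Discrete u'' = 0.1*exp(u) with u(0) = 0, u(1) = 1 (toy problem).
          r = np.zeros_like(u)
          r[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] - 0.1 * np.exp(u[1:-1])
          r[0], r[-1] = u[0], u[-1] - 1.0
          return r

      u = newton_krylov(residual, np.zeros(50), method='lgmres')
      print(np.abs(residual(u)).max())    # small at convergence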

  13. Development of a Tritium Extruder for ITER Pellet Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M.J. Gouge; P.W. Fisher

    As part of the International Thermonuclear Experimental Reactor (ITER) plasma fueling development program, Oak Ridge National Laboratory (ORNL) has fabricated a pellet injection system to test the mechanical and thermal properties of extruded tritium. Hydrogenic pellets will be used in ITER to sustain the fusion power in the plasma core and may be crucial in reducing first-wall tritium inventories by a process of "isotopic fueling" in which tritium-rich pellets fuel the burning plasma core and deuterium gas fuels the edge. This repeating single-stage pneumatic pellet injector, called the Tritium-Proof-of-Principle Phase II (TPOP-II) Pellet Injector, has a piston-driven mechanical extruder and is designed to extrude and accelerate hydrogenic pellets sized for the ITER device. The TPOP-II program has the following development goals: evaluate the feasibility of extruding tritium and deuterium-tritium (D-T) mixtures for use in future pellet injection systems; determine the mechanical and thermal properties of tritium and D-T extrusions; integrate, test, and evaluate the extruder in a repeating, single-stage light gas gun that is sized for the ITER application (pellet diameter ~7 to 8 mm); evaluate options for recycling propellant and extruder exhaust gas; and evaluate operability and reliability of ITER-prototypical fueling systems in an environment of significant tritium inventory that requires secondary and room containment systems. In tests with deuterium feed at ORNL, up to 13 pellets per extrusion have been extruded at rates up to 1 Hz and accelerated to speeds of 1.0 to 1.1 km/s, using hydrogen propellant gas at a supply pressure of 65 bar. Initially, deuterium pellets 7.5 mm in diameter and 11 mm in length were produced, the largest cryogenic pellets produced by the fusion program to date. These pellets represent about a 10% density perturbation to ITER. Subsequently, the extruder nozzle was modified to produce pellets that are almost 7.5-mm right circular cylinders. Tritium and D-T pellets have been produced in experiments at the Los Alamos National Laboratory Tritium Systems Test Assembly. About 38 g of tritium have been utilized in the experiment. The tritium was received in eight batches, six from product containers and two from the Isotope Separation System. Two types of runs were made: those in which the material was only extruded, and those in which pellets were produced and fired with deuterium propellant. A total of 36 pure tritium runs and 28 D-T mixture runs were made. Extrusion experiments indicate that both T2 and D-T will require higher extrusion forces than D2, by about a factor of two.

  14. Iterative universal state selective correction for the Brillouin-Wigner multireference coupled-cluster theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banik, Subrata; Ravichandran, Lalitha; Brabec, Jiri

    2015-03-21

    As a further development of the previously introduced a posteriori Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] and [Brabec et al., J. Chem. Phys. 136, 124102 (2012)], we suggest an iterative form of the USS correction by means of correcting effective Hamiltonian matrix elements. We also formulate USS corrections via the left Bloch equations. The convergence of the USS corrections with excitation level towards the FCI limit is also investigated. Various forms of the USS and simplified diagonal USSD corrections at the SD and SD(T) levels are numerically assessed on several model systems and on the ozone and tetramethyleneethane molecules. It is shown that the iterative USS correction can successfully replace the previously developed a posteriori BWCC size-extensivity correction, while it is not sensitive to intruder states and also performs well in cases where the a posteriori correction fails, e.g., for the asymmetric vibration mode of ozone.

  15. Modelling of steady state erosion of CFC actively water-cooled mock-up for the ITER divertor

    NASA Astrophysics Data System (ADS)

    Ogorodnikova, O. V.

    2008-04-01

    Calculations of the physical and chemical erosion of CFC (carbon fibre composite) monoblocks forming the outer vertical target of the ITER divertor during normal operation regimes have been performed. Off-normal events and ELMs are not considered here. For a set of components under thermal and particle loads at glancing incidence, variations in the material properties and/or assembly defects could result in different erosion of the actively cooled components and, thus, in temperature instabilities. Operation regimes where the temperature instability takes place are investigated. It is shown that the temperature and erosion instabilities are probably not a critical point for the present design of the ITER vertical target if a realistic variation of material properties is assumed, namely, a 20% difference in the thermal conductivities of neighbouring monoblocks and a maximum allowable defect between the CFC armour and the cooling tube of +/-90° in the circumferential direction from the apex.

  16. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
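
    In the factored form assumed here, the traveltime is written as the product T = T_0 τ, with T_0 a known solution of a simple eikonal equation; substituting into the eikonal equation |∇T| = s(x) gives an equation for the smooth correction factor τ alone:

      \left| \tau \,\nabla T_0 + T_0 \,\nabla \tau \right| = s(x),

    which removes the point-source singularity from the unknown; the Gauss-Seidel sweeps with causality-respecting upwinding are then applied to this equation for τ.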

  17. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations describing nonlinear heat and mass transfer phenomena are difficult to solve. When an exact solution cannot easily be obtained, it is necessary to use a numerical procedure such as the finite difference method. In terms of numerical procedure, a method can be considered efficient if it gives an approximate solution within the specified error at the least computational cost. In this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized using an implicit finite difference scheme to construct the corresponding approximation equation, which yields a large, sparse nonlinear system. Using the Newton method to linearize this nonlinear system, the paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving 2D PMEs; a generic sketch of the outer-inner structure is given below. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparison, the Newton-Gauss-Seidel (NGS) and Newton-SOR (NSOR) iterative methods are also considered. The numerical results show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations required for convergence, the computation time, and the maximum absolute errors produced.
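
    The sketch below shows the generic outer-Newton / inner-SOR structure (a plain SOR inner solve stands in for the paper's four-point EGSOR variant, and the 1D nonlinear test problem is illustrative):

      # Outer Newton iteration with an inner SOR linear solve (sketch).
      import numpy as np

      n = 30
      h = 1.0 / (n + 1)

      def F(u):
          # Discretized -u'' + u**2 = 1 with zero Dirichlet boundaries.
          up = np.concatenate(([0.0], u, [0.0]))
          return -(up[2:] - 2 * up[1:-1] + up[:-2]) / h**2 + u**2 - 1.0

      def jacobian(u):
          J = np.zeros((n, n))
          i = np.arange(n)
          J[i, i] = 2.0 / h**2 + 2 * u
          J[i[:-1], i[:-1] + 1] = -1.0 / h**2
          J[i[1:], i[1:] - 1] = -1.0 / h**2
          return J

      def sor_solve(A, b, omega=1.8, sweeps=300):
          x = np.zeros_like(b)
          for _ in range(sweeps):              # inner (linear) iterations
              for k in range(len(b)):
                  sigma = A[k, :k] @ x[:k] + A[k, k + 1:] @ x[k + 1:]
                  x[k] += omega * ((b[k] - sigma) / A[k, k] - x[k])
          return x

      u = np.zeros(n)
      for _ in range(10):                      # outer (Newton) iterations
          du = sor_solve(jacobian(u), -F(u))
          u += du
          if np.abs(du).max() < 1e-9:
              break
      print(np.abs(F(u)).max())                # residual at convergence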

  18. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate-gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin), and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and to a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.

  19. CUDA GPU based full-Stokes finite difference modelling of glaciers

    NASA Astrophysics Data System (ADS)

    Brædstrup, C. F.; Egholm, D. L.

    2012-04-01

    Many have stressed the limitations of using the shallow-shelf and shallow-ice approximations when modelling ice streams or surging glaciers. A full-Stokes approach requires large amounts of computing power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a red-black Gauss-Seidel iterative linear solver to solve the full-Stokes equations. This technique has proven very effective when applied to the Stokes equation in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers converge much faster. To aid convergence, the solver uses a multigrid approach where values are interpolated and extrapolated between different grid resolutions to minimize the short-wavelength errors efficiently; this reduces the iteration count by several orders of magnitude. The run-time is further reduced by using GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported 2-11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behaviour and its impact up-ice.
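
    A red-black sweep is the key GPU-friendly kernel here: points of one color depend only on points of the other color, so all points of a color can be updated concurrently. The sketch below applies it to a 2D Poisson problem, with numpy standing in for CUDA; the grid, right-hand side, and sweep count are illustrative, and in the model described above the sweeps would be embedded in a multigrid cycle.

      # Red-black Gauss-Seidel sweeps for a 2D Poisson problem (sketch).
      import numpy as np

      n = 65
      u = np.zeros((n, n))                 # unknowns, Dirichlet boundary = 0
      f = np.ones((n, n))                  # right-hand side
      h2 = (1.0 / (n - 1)) ** 2

      ix, iy = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
      for sweep in range(200):
          for color in (0, 1):             # update red points, then black
              m = (ix + iy) % 2 == color
              m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False  # keep boundary
              u[m] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                             np.roll(u, 1, 1) + np.roll(u, -1, 1) - h2 * f)[m]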

  20. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian-Free Newton-Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes over conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, and so on, until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward (see the sketch below). Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK, to gain insight into the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
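
    The matrix-free ingredient behind JFNK is that the Jacobian is never formed; only its action on a Krylov vector v is needed, approximated by a forward difference of the residual F,

      J(u)\,v \approx \frac{F(u + \epsilon v) - F(u)}{\epsilon},
      \qquad
      \epsilon = \frac{\sqrt{\epsilon_{\mathrm{mach}}}\,(1 + \lVert u \rVert)}{\lVert v \rVert},

    where the choice of ε shown is one standard recipe among several; as noted above, choosing the perturbation size appropriately is not always straightforward.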

  1. Modifications Of Hydrostatic-Bearing Computer Program

    NASA Technical Reports Server (NTRS)

    Hibbs, Robert I., Jr.; Beatty, Robert F.

    1991-01-01

    Several modifications made to enhance utility of HBEAR, computer program for analysis and design of hydrostatic bearings. Modifications make program applicable to more realistic cases and reduce time and effort necessary to arrive at a suitable design. Uses search technique to iterate on size of orifice to obtain required pressure ratio.

  2. Discuss Similarity Using Visual Intuition

    ERIC Educational Resources Information Center

    Cox, Dana C.; Lo, Jane-Jane

    2012-01-01

    The change in size from a smaller shape to a larger similar shape (or vice versa) is created through continuous proportional stretching or shrinking in every direction. Students cannot solve similarity tasks simply by iterating or partitioning a composed unit, strategies typically used on numerical proportional tasks. The transition to thinking…

  3. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition, the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm is formulated which adaptively switches from the matrix-inversion-based iteration to a matrix-multiplication-based iteration due to Kovarik, and to Björck and Bowie. The decision when to switch is made using a condition estimator. This multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
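
    A minimal version of the inversion-based Newton iteration for a square nonsingular A is easy to state: X_{k+1} = (X_k + X_k^{-H})/2 converges quadratically to the unitary polar factor U, and H = U^H A is then the Hermitian factor. The sketch below omits the acceleration scaling and the hybrid switch to the multiplication-rich iteration discussed in the paper.

      # Newton iteration for the polar decomposition A = U H (sketch).
      import numpy as np

      def polar_newton(A, tol=1e-12, maxit=100):
          X = A.astype(complex)
          for _ in range(maxit):
              Xnew = 0.5 * (X + np.linalg.inv(X).conj().T)
              done = (np.linalg.norm(Xnew - X, 'fro')
                      <= tol * np.linalg.norm(Xnew, 'fro'))
              X = Xnew
              if done:
                  break
          H = X.conj().T @ A
          return X, 0.5 * (H + H.conj().T)   # symmetrize H against rounding

      A = np.random.default_rng(1).normal(size=(4, 4))
      U, H = polar_newton(A)
      print(np.linalg.norm(U @ H - A))       # ~1e-15: A = U H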

  4. A Robust Locally Preconditioned Semi-Coarsening Multigrid Algorithm for the 2-D Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Cain, Michael D.

    1999-01-01

    The goal of this thesis is to develop an efficient and robust locally preconditioned semi-coarsening multigrid algorithm for the two-dimensional Navier-Stokes equations. The thesis examines the performance of the multigrid algorithm with local preconditioning for an upwind discretization of the Navier-Stokes equations. A block Jacobi iterative scheme is used because of its ability to damp high-frequency error modes. At low Mach numbers, the performance of a flux preconditioner is investigated. The flux preconditioner utilizes a new limiting technique based on local information that was developed by Siu. Full-coarsening and semi-coarsening are examined, as well as the multigrid V-cycle and full multigrid. The numerical tests were performed on a NACA 0012 airfoil at a range of Mach numbers. The tests show that semi-coarsening with flux preconditioning is the most efficient and robust combination of coarsening strategy and iterative scheme, especially at low Mach numbers.

  5. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing a correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes in different known radiation environments. These optimal values were normalized to background and fitted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This fitted function was used to build an iterative segmentation method, and on this basis an automatic delineation procedure was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting fitted function had a linear dependence on the SBR and decreased inversely with the volume. During validation of the proposed method, the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images.
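
    The iterative loop described above can be sketched as follows; the calibration function here is a made-up placeholder standing in for the paper's phantom-derived regression in volume and SBR, and all names are illustrative.

      # Iterative threshold-based segmentation sketch for a PET volume.
      import numpy as np

      def threshold_fraction(volume_ml, sbr):
          # Placeholder calibration: decreasing with volume, increasing at
          # low SBR; the paper fits this surface to phantom measurements.
          return np.clip(0.4 + 0.5 / sbr - 0.02 * volume_ml, 0.2, 0.8)

      def segment(img, voxel_ml, background, maxit=50):
          vol = 10.0                           # initial volume guess (ml)
          mask = img > background
          for _ in range(maxit):
              sbr = img.max() / background
              thr = background + threshold_fraction(vol, sbr) * (img.max() - background)
              mask = img >= thr
              new_vol = mask.sum() * voxel_ml
              if abs(new_vol - vol) < 1e-3:    # volume has converged
                  break
              vol = new_vol
          return mask

      img = np.full((32, 32, 32), 1.0)
      img[12:20, 12:20, 12:20] = 5.0           # synthetic hot lesion
      print(segment(img, voxel_ml=0.05, background=1.0).sum())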

  6. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    NASA Astrophysics Data System (ADS)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been considered, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, such as combinatorial enumeration and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations were observed numerically only much later and, while their origin is clearly tied to the discrete character of the iteration, the size of their amplitude is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models in what is arguably the most elementary setting, the pinning model, which actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover, we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  7. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data the resource requirements grow with the data size, which challenges practical implementation even on current-generation high performance computing systems. A smart parallelization approach is therefore essential for migrating 3D data. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, owing to its resource requirements in memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth-migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
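
    The flexi-depth idea reduces, at its core, to choosing the number of depth slices per iteration from the available node memory at runtime, so that the image slab and its traveltime tables always fit. The sketch below is purely illustrative; all sizes and the commented migrate call are assumptions, not the paper's actual interface.

      # Flexi-depth iteration sizing sketch.
      nx, ny, nz = 1000, 1000, 800        # imaging grid (illustrative)
      bytes_per_cell = 4                  # float32 image cell
      node_memory = 64 * 2**30            # 64 GiB per node (illustrative)
      usable = int(0.6 * node_memory)     # headroom for buffers and MPI

      # Factor 2: one image slab plus roughly table-sized working storage.
      depth_per_iter = max(1, usable // (nx * ny * bytes_per_cell * 2))
      depth_per_iter = min(depth_per_iter, nz)

      slabs = []
      for z0 in range(0, nz, depth_per_iter):   # the flexi-depth iterations
          z1 = min(z0 + depth_per_iter, nz)
          slabs.append((z0, z1))
          # migrate_chunk(z0, z1)  # hypothetical per-slab migration call
      print(depth_per_iter, len(slabs))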

  8. Prospects for Advanced Tokamak Operation of ITER

    NASA Astrophysics Data System (ADS)

    Neilson, George H.

    1996-11-01

    Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single-null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design) or an alternate "hybrid" central solenoid design that provides greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allow operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure. This might be accomplished in ITER through the use of active control coils external to the vacuum vessel, actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles, a major limitation on ITER steady-state modes. An alternate approach pursued in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. Modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse-shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these and other operating regimes.

  9. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.

    2015-02-01

    DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to a similarly large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER-relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have a similarly large size; they differ only in the phase of the locked mode, after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change of state in which the classical tearing index changes from stable to marginal through the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than by the driving error field.

  10. Performance of spectral MSE diagnostic on C-Mod and ITER

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team

    2015-11-01

    The magnetic field was measured on Alcator C-Mod by applying spectral motional Stark effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made close to ITER values of Stark splitting (~ Bv⊥) with background levels similar to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in |B| and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG03-96ER-54373 and DE-FC02-99ER54512.

  11. Boundary plasma heat flux width measurements for poloidal magnetic fields above 1 Tesla in the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Brunner, Dan; Labombard, Brian; Kuang, Adam; Terry, Jim; Alcator C-Mod Team

    2017-10-01

    The boundary heat flux width, along with the total power flowing into the boundary, sets the power exhaust challenge for tokamaks. A multi-machine boundary heat flux width database found that the heat flux width in H-modes scaled inversely with poloidal magnetic field (Bp) and was independent of machine size. The maximum Bp in the database was 0.8 T, whereas the ITER 15 MA, Q = 10 scenario will be 1.2 T. New measurements of the boundary heat flux width in Alcator C-Mod extend the international database to plasmas with Bp up to 1.3 T. C-Mod was the only experiment able to operate at ITER-level Bp. These new measurements are from over 300 plasma shots in L-, I-, and EDA H-modes spanning essentially the whole operating space in C-Mod. We find that the inverse-Bp dependence of the heat flux width in H-modes continues to ITER-level Bp, further reinforcing the empirical projection of 500 μm heat flux width for ITER. We find 50% scatter around the inverse-Bp scaling and are searching for the 'hidden variables' causing this scatter. Supported by USDoE award DE-FC02-99ER54512.

  12. Cold Test and Performance Evaluation of Prototype Cryoline-X

    NASA Astrophysics Data System (ADS)

    Shah, N.; Choukekar, K.; Kapoor, H.; Muralidhara, S.; Garg, A.; Kumar, U.; Jadon, M.; Dash, B.; Bhattachrya, R.; Badgujar, S.; Billot, V.; Bravais, P.; Cadeau, P.

    2017-12-01

    The multi-process-pipe, vacuum-jacketed cryolines for the ITER project are probably the world's most complex cryolines in terms of layout, load cases, quality, safety and regulatory requirements. As a risk mitigation measure, the design, manufacturing and testing of a prototype cryoline (PTCL) was planned before the approval of the final design of the ITER cryolines. The 29-meter-long PTCL consists of 6 process pipes encased by a thermal shield inside an outer vacuum jacket of DN 600 size and carries cold helium at 4.5 K and 80 K. The global heat load limit was defined as 1.2 W/m at 4.5 K and 4.5 W/m at 80 K. The PTCL-X (PTCL for Group-X cryolines) was specified in detail by ITER-India and designed and manufactured by Air Liquide. PTCL-X was installed and tested at cryogenic temperature at the ITER-India Cryogenic Laboratory in 2016. The heat loads at 4.5 K and 80 K, estimated using the enthalpy difference method, were found to be approximately 0.8 W/m and 4.2 W/m, respectively, well within the defined limits. The thermal shield temperature profile was also found to be satisfactory. This paper summarizes the cold test results of PTCL-X.

  13. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    The recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. For some clinical procedures, such as cardiac CT, only an ROI is needed for diagnosis. A high-resolution reconstruction of the full field of view (FOV) wastes computational effort and results in reconstruction times slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm, with particular attention to improvements in the equalization between regions inside and outside of the ROI. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
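
    As background for the ML approach discussed above, the sketch below shows the generic ML-EM multiplicative update for a linear forward model with Poisson data; it is a minimal illustration, not the authors' algorithm, and the toy system matrix A and data y are assumptions.

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Generic ML-EM multiplicative update for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])              # uniform initial image
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image (column sums of A)
    for _ in range(n_iters):
        proj = np.maximum(A @ x, 1e-12)  # forward projection, guard divide-by-zero
        x *= (A.T @ (y / proj)) / sens   # multiplicative correction
    return x

# Toy problem: 3 "rays" through 2 "voxels" (purely illustrative).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(mlem(A, A @ np.array([2.0, 5.0])))  # converges towards [2, 5]
```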

  14. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s to calculate self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Because of the very small time steps (of the order of the inverse plasma frequency) and mesh size required, the computational demands can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, small computational domains and/or reduced dimensionality are usually used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) systems of future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large RF-driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced, as well as their coupling to codes describing the whole source (PIC codes or fluid codes). Different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion are presented and discussed, as well as selected code results. The main focus of future calculations will be the meniscus formation and the identification of measures for reducing the co-extracted electrons, in particular for deuterium operation. Recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
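
    To make the basic PIC cycle concrete (charge deposition, field solve, field gather, particle push), here is a minimal 1D electrostatic sketch in normalized units. It illustrates the method only; it is not ONIX or any production source code, and every parameter value is an arbitrary assumption.

```python
import numpy as np

# Minimal 1D electrostatic PIC with periodic boundaries (normalized units).
np.random.seed(0)
L, ng, npart, dt = 1.0, 64, 10000, 0.05
dx = L / ng
x = np.random.uniform(0, L, npart)       # particle positions
v = np.random.normal(0, 0.1, npart)      # particle velocities
q_over_m, weight = -1.0, L / npart       # electrons on a neutralizing ion background

for step in range(100):
    # 1) deposit charge on the grid (nearest-grid-point weighting)
    idx = (x / dx).astype(int) % ng
    rho = 1.0 - np.bincount(idx, minlength=ng) * weight / dx  # ions minus electrons
    # 2) solve Poisson's equation in Fourier space: k^2 phi_k = rho_k
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                           # dummy value; the mean mode is zeroed below
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_k))   # E = -d(phi)/dx
    # 3) gather the field at the particles and push them (explicit step)
    v += q_over_m * E[idx] * dt
    x = (x + v * dt) % L
```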

  15. Upgrade of the BATMAN test facility for H- source development

    NASA Astrophysics Data System (ADS)

    Heinemann, B.; Fröschle, M.; Falter, H.-D.; Fantz, U.; Franzen, P.; Kraus, W.; Nocentini, R.; Riedl, R.; Ruf, B.

    2015-04-01

    The development of a radio frequency (RF) driven source for negative hydrogen ions for the neutral beam heating devices of fusion experiments has been carried out successfully at IPP since 1996 at the test facility BATMAN. The required ITER parameters have been achieved with the prototype source, consisting of a cylindrical driver on the back side of a racetrack-like expansion chamber. The extraction system, called "Large Area Grid" (LAG), was derived from a positive ion accelerator from ASDEX Upgrade (AUG), using its aperture size (ø 8 mm) and pattern but replacing the first two electrodes and masking down the extraction area to 70 cm2. BATMAN is a well diagnosed and highly flexible test facility, which will be kept operational in parallel to the half-size ITER source test facility ELISE for further developments to improve the RF efficiency and the beam properties. It is therefore planned to upgrade BATMAN with a new ITER-like grid system (ILG) representing almost one ITER beamlet group, namely 5 × 14 apertures (ø 14 mm). In addition to the standard three-grid extraction system, a repeller electrode upstream of the grounded grid can optionally be installed, positively charged against it by 2 kV. This is intended to affect the onset of the space charge compensation downstream of the grounded grid and to reduce the backstreaming of positive ions from the drift space into the ion source. For magnetic filter field studies, a plasma grid current of up to 3 kA will be available, as well as permanent magnets embedded into a diagnostic flange or in an external magnet frame. Furthermore, different source vessels and source configurations are under discussion for BATMAN, e.g. using the AUG-type racetrack RF source as driver instead of the circular one, or modifying the expansion chamber for a more flexible positioning of the external magnet frame.

  16. Modelling of caesium dynamics in the negative ion sources at BATMAN and ELISE

    NASA Astrophysics Data System (ADS)

    Mimo, A.; Wimmer, C.; Wünderlich, D.; Fantz, U.

    2017-08-01

    Knowledge of the Cs dynamics in negative hydrogen ion sources is a primary issue for achieving the ITER requirements for the neutral beam injection (NBI) systems, i.e. one hour of operation with an accelerated ion current of 40 A of D- and a ratio of co-extracted electrons to negative ions below one. Production of negative ions is mostly achieved by conversion of hydrogen/deuterium atoms on a converter surface, which is caesiated in order to reduce the work function and increase the conversion efficiency. Understanding the Cs transport and redistribution mechanisms inside the source is necessary for achieving high performance. Cs dynamics was therefore investigated by means of numerical simulations performed with the Monte Carlo transport code CsFlow3D. Simulations of the prototype source (1/8 of the ITER NBI source size) have shown that the plasma distribution inside the source has the major effect on Cs dynamics during the pulse: asymmetry of the plasma parameters leads to asymmetry in the Cs distribution in front of the plasma grid. The simulated time traces and the general simulation results are in agreement with the experimental measurements. Simulations performed for the ELISE testbed (half of the ITER NBI source size) have shown an effect of the duration of the vacuum phase on the amount and stability of Cs during the pulse. The sputtering of Cs by back-streaming ions was reproduced by the simulations and is in agreement with the experimental observations: this can become a critical issue during long pulses, especially in the case of continuous extraction as foreseen for ITER. These results and the acquired knowledge of Cs dynamics will be useful for better managing Cs and thus reducing its consumption, in the direction of the demonstration fusion power plant DEMO.

  17. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influence on the quantification accuracy and partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on quantification, and the PVE for different sphere sizes, along the field of view, and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured to true activity of up to 24.06%), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations <3% for measured to true activity). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ~30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms yielded substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest spheres, respectively). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm, and the applied corrections. In particular, the influence of the emission activity during a transmission measurement performed with a Co-57 source must be considered. To obtain comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters as well as applied reconstruction and correction protocols) is necessary.

  18. Semi-Technical Cryogenic Molecular Sieve Bed for the Tritium Extraction System of the Test Blanket Module for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beloglazov, S.; Bekris, N.; Glugla, M.

    2005-07-15

    The tritium extraction from the ITER Helium Cooled Pebble Bed (HCPB) Test Blanket Module purge gas is proposed to be performed in a two-step process: trapping water in a cryogenic cold trap, and adsorption of hydrogen isotopes (H₂, HT, T₂) as well as impurities (N₂, O₂) in a Cryogenic Molecular Sieve Bed (CMSB) at 77 K. A CMSB at semi-technical scale (one-sixth of the flow rate of the ITER HCPB) was designed and constructed at the Forschungszentrum Karlsruhe. The full capacity of the CMSB, filled with 20 kg of MS-5A, was calculated from adsorption isotherm data to be 9.4 mol of H₂ at a partial pressure of 120 Pa. Breakthrough tests at flow rates up to 2 Nm³ h⁻¹ of He with 110 Pa of H₂ confirmed, with good agreement, the adsorption capacity of the CMSB. The mass-transfer zone was found to be relatively narrow (12.5% of the MS bed height), allowing the CMSB to be scaled up to ITER flow rates.

  19. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
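
    As a toy illustration of the solver switch described above, the sketch below contrasts a plain Neumann fixed-point iteration with SciPy's BiCGSTAB on a generic equation of the same algebraic form, u = f + Ku, i.e. (I - K)u = f. Here K and f are random stand-ins for the linearized contrast-source operator and the incident field, not the INCS operators.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 200
rng = np.random.default_rng(1)
K = 0.98 * rng.standard_normal((n, n)) / np.sqrt(n)  # spectral radius near one
f = rng.standard_normal(n)

# Neumann iteration u <- f + K u converges only while the spectral radius
# of K stays below one, and very slowly as it approaches one.
u = np.zeros(n)
for _ in range(200):
    u = f + K @ u

# A Krylov method applied to (I - K) u = f is far more robust for strong contrast.
A = LinearOperator((n, n), matvec=lambda w: w - K @ w)
u_k, info = bicgstab(A, f, maxiter=1000)
print("Neumann residual :", np.linalg.norm(u - K @ u - f))
print("BiCGSTAB residual:", np.linalg.norm(u_k - K @ u_k - f), "info:", info)
```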

  20. Development of the ITER ICH Transmission Line and Matching System

    NASA Astrophysics Data System (ADS)

    Rasmussen, D. A.; Goulding, R. H.; Pesavento, P. V.; Peters, B.; Swain, D. W.; Fredd, E. H.; Hosea, J.; Greenough, N.

    2011-10-01

    The ITER Ion Cyclotron Heating (ICH) System is designed to couple 20 MW of power for ion and electron heating. Prototype components for the ITER ICH transmission line and matching system are being designed and tested. The ICH transmission lines are pressurized 300 mm diameter coaxial lines with a water-cooled aluminum outer conductor and a gas- and water-cooled copper inner conductor. Each ICH transmission line is designed to handle 40-55 MHz power at up to 6 MW/line. A total of 8 lines split to 16 antenna inputs on two ICH antennas. Industrial suppliers have designed coaxial transmission line and matching components, and prototypes will be manufactured. The prototype components will be qualified on a test stand operating at the full power and pulse length needed for ITER. The matching system must accommodate dynamic changes in the plasma loading due to ELMs and the L- to H-mode transition. Passive ELM tolerance will be provided by hybrid couplers and loads, which can absorb the transient reflected power. The system is also designed to compensate for the mutual inductances of the antenna current straps to limit the peak voltages on the antenna array elements.

  1. Surface-structured diffuser by iterative down-size molding with glass sintering technology.

    PubMed

    Lee, Xuan-Hao; Tsai, Jung-Lin; Ma, Shih-Hsin; Sun, Ching-Cherng

    2012-03-12

    In this paper, a down-size sintering scheme for making high-performance diffusers with microstructures for beam shaping is presented and demonstrated. Using the down-size sintering method, a surface-structured film is designed and fabricated to verify the feasibility of the sintering technology, in which a dimension reduction of up to 1/8 has been achieved. In addition, a special impressing technology has been applied to fabricate diffuser films from various materials, with a transmission efficiency as high as 85% and above. When introduced into possible lighting applications, the diffusers have shown high performance in glare reduction, beam shaping and energy saving.

  2. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes, based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
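
    For background on the decoding these designs target, iterative decoding over a binary erasure channel reduces to the peeling procedure sketched below: any parity check containing exactly one erased bit determines that bit, and decoding stalls precisely when the remaining erasures form a stopping set, which is why the short-block design maximizes the minimum stopping set size. The parity-check matrix and erasure pattern here are tiny illustrative assumptions, not the designed codes.

```python
import numpy as np

def peel_decode(H, bits, erased, max_iters=100):
    """Iterative (peeling) erasure decoder for a binary linear code."""
    bits, erased = bits.copy(), erased.copy()
    for _ in range(max_iters):
        progress = False
        for row in H.astype(bool):
            idx = np.flatnonzero(row & erased)
            if len(idx) == 1:                         # degree-one check: solvable
                known = row & ~erased
                bits[idx[0]] = bits[known].sum() % 2  # parity determines the bit
                erased[idx[0]] = False
                progress = True
        if not progress:                              # stuck: a stopping set remains
            break
    return bits, erased

# All-zero codeword of a toy code with two erasures (positions 0 and 3).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
bits = np.zeros(6, dtype=int)
erased = np.array([True, False, False, True, False, False])
print(peel_decode(H, bits, erased))   # both erasures recovered
```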

  3. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations in both system size and simulation time. In our method this time-length-scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for identifying ion adsorption sites with, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is the assumption of full structural relaxation of the aggregates between growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.

  4. Using an integrative mock-up simulation approach for evidence-based evaluation of operating room design prototypes.

    PubMed

    Bayramzadeh, Sara; Joseph, Anjali; Allison, David; Shultz, Jonas; Abernathy, James

    2018-07-01

    This paper describes the process and tools developed as part of a multidisciplinary, collaborative, simulation-based approach for iterative design and evaluation of operating room (OR) prototypes. Full-scale physical mock-ups of healthcare spaces offer an opportunity to actively communicate with and engage multidisciplinary stakeholders in the design process. While mock-ups are increasingly being used in healthcare facility design projects, they are rarely evaluated in a manner that supports active user feedback and engagement. Researchers and architecture students worked closely with clinicians and architects to develop OR design prototypes and engaged clinical end-users in simulated scenarios. An evaluation toolkit was developed to compare design prototypes. The mock-up evaluation helped the team make key decisions about room size, location of the OR table, intra-room zoning, and door locations. Structured, simulation-based mock-up evaluations conducted during the design process can help stakeholders visualize their future workspace and provide active feedback. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear-scaling inversion while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
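
    For reference, the classical Hotelling (Newton-Schulz) iteration that the modified scheme builds on is only a few lines; in the MD setting described above, the inverse from the previous step would replace the conservative initial guess used here. This is a minimal textbook sketch, not the authors' modified, filtered sparse version.

```python
import numpy as np

def hotelling_inverse(A, n_iters=20):
    """Newton-Schulz iteration X <- X (2I - A X) for an approximate inverse."""
    # Conservative initial guess guaranteeing convergence: A^T / (||A||_1 ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iters):
        X = X @ (2 * I - A @ X)   # quadratic convergence once ||I - AX|| < 1
    return X

A = np.random.default_rng(0).standard_normal((50, 50))
A = A @ A.T + 50 * np.eye(50)     # well-conditioned SPD test matrix
print(np.linalg.norm(hotelling_inverse(A) @ A - np.eye(50)))  # ~ machine precision
```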

  6. Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu

    2006-04-17

    TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large ones. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixes, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the efficiency of the new version of TOUGH2_MP is significantly improved. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code shows excellent scalability in memory requirement as well as in computing time.

  7. Preliminary evaluation of cryogenic two-phase flow imaging using electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Xie, Huangjun; Yu, Liu; Zhou, Rui; Qiu, Limin; Zhang, Xiaobin

    2017-09-01

    The potential application of 2-D eight-electrode electrical capacitance tomography (ECT) to inversion imaging of liquid nitrogen-vaporous nitrogen (LN2-VN2) flow in a tube is theoretically evaluated. The phase distribution of the computational domain is obtained using the simultaneous iterative reconstruction technique with a variable iterative step size. The detailed mathematical derivations for the calculations are presented. The calculated phase distribution for the case of two detached LN2 columns shows results comparable with the water-air case, despite the much lower dielectric permittivity of LN2 compared with water. Inversion images of eight different LN2-VN2 flow patterns are presented and quantitatively evaluated by calculating the relative void fraction error and the correlation coefficient. The results demonstrate that the developed reconstruction technique for ECT has the capacity to reconstruct the phase distribution of complex LN2-VN2 flows, while the accuracy of the inversion images is significantly influenced by the size of the discrete phase. The influence of measurement noise on the image quality is also considered in the calculations.
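
    The abstract does not spell out the update rule; the sketch below is one common Landweber/SIRT-type realization with a simple adaptive step size (grow the step while the residual falls, halve it otherwise). The sensitivity matrix S, the normalized capacitance vector c, and the step-adaptation rule are all illustrative assumptions.

```python
import numpy as np

def sirt(S, c, n_iters=200):
    """Landweber/SIRT-type iteration with a variable step size."""
    g = S.T @ c                     # linear back-projection as initial guess
    step, err_prev = 1.0, np.inf
    for _ in range(n_iters):
        r = c - S @ g               # residual in measurement space
        g = np.clip(g + step * (S.T @ r), 0.0, 1.0)   # enforce [0, 1] permittivity
        err = np.linalg.norm(r)
        step = step * 1.1 if err < err_prev else step * 0.5
        err_prev = err
    return g

# Toy example: 28 electrode-pair measurements of a 256-pixel phase image.
S = np.random.default_rng(0).random((28, 256)) * 0.01   # toy sensitivity matrix
g_true = np.zeros(256); g_true[100:140] = 1.0           # toy phase distribution
print(np.round(sirt(S, S @ g_true)[95:150:10], 2))
```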

  8. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported, showing that RKELM performs at a level of generalization competitive with the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
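
    A minimal sketch of the idea (random support-vector selection plus a closed-form regularized least-squares solve, with no iterative training) might look as follows; the RBF kernel, parameter names, and defaults are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def rkelm_fit(X, y, m=100, gamma=1.0, C=1.0, seed=0):
    """Fit a reduced kernel ELM: m randomly chosen samples act as support vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    K = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))  # n x m RBF
    beta = np.linalg.solve(K.T @ K + np.eye(m) / C, K.T @ y)             # ridge solve
    return centers, beta

def rkelm_predict(X, centers, beta, gamma=1.0):
    K = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    return K @ beta

# Toy regression run
X = np.random.default_rng(1).standard_normal((500, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
centers, beta = rkelm_fit(X, y, m=50)
print(np.sqrt(np.mean((rkelm_predict(X, centers, beta) - y) ** 2)))  # training RMSE
```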

  9. Electron Cyclotron power management for control of Neoclassical Tearing Modes in the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.

    Time-dependent simulations are used to evolve plasma discharges in combination with a Modified Rutherford equation (MRE) for calculation of Neoclassical Tearing Mode (NTM) stability in response to Electron Cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. These simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the Upper Launcher during the entire flattop phase. By assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q = 10.

  10. Electron cyclotron power management for control of neoclassical tearing modes in the ITER baseline scenario

    NASA Astrophysics Data System (ADS)

    Poli, F. M.; Fredrickson, E. D.; Henderson, M. A.; Kim, S.-H.; Bertelli, N.; Poli, E.; Farina, D.; Figini, L.

    2018-01-01

    Time-dependent simulations are used to evolve plasma discharges in combination with a modified Rutherford equation for calculation of neoclassical tearing mode (NTM) stability in response to electron cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. Simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2, 1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2, 1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the upper launcher during the entire flattop phase. Assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q = 10.
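
    As a purely schematic illustration of the island dynamics discussed in these records, the sketch below integrates a Rutherford-type width equation with a bootstrap drive, a saturation term, and an EC stabilization term. Every coefficient is an invented placeholder, not an ITER value or the paper's MRE; the point is only the qualitative behaviour (modest EC power applied to a small island suppresses it, while without EC power the island grows to a large saturated width).

```python
def island_evolution(P_ec=0.0, w0=0.02, w_dep=0.05, dt=1e-3, t_end=5.0):
    """Toy Rutherford-type width equation: drive, saturation, EC suppression."""
    c_boot, c_stab, c_ec = 1.0, 10.0, 40.0   # illustrative placeholder coefficients
    w = w0
    for _ in range(int(t_end / dt)):
        drive = c_boot * w / (w**2 + w_dep**2)   # bootstrap drive, small-island cutoff
        dwdt = drive - c_stab * w**2 - c_ec * P_ec * w / (w**2 + w_dep**2)
        w = max(w + dwdt * dt, 0.0)              # explicit Euler step, w >= 0
    return w

print(island_evolution(P_ec=0.0))   # grows to a large saturated island
print(island_evolution(P_ec=0.1))   # modest EC power suppresses the mode
```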

  11. Electron Cyclotron power management for control of Neoclassical Tearing Modes in the ITER baseline scenario

    DOE PAGES

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.; ...

    2017-09-21

    Time-dependent simulations are used to evolve plasma discharges in combination with a Modified Rutherford equation (MRE) for calculation of Neoclassical Tearing Mode (NTM) stability in response to Electron Cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. These simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the Upper Launcher during the entire flattop phase. By assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q = 10.

  12. Static shape of an acoustically levitated drop with wave-drop interaction

    NASA Astrophysics Data System (ADS)

    Lee, C. P.; Anilkumar, A. V.; Wang, T. G.

    1994-11-01

    The static shape of a drop levitated and flattened by an acoustic standing wave field in air is calculated, requiring self-consistency between the drop shape and the wave. The wave is calculated for a given shape using the boundary integral method. From the resulting radiation stress on the drop surface, the shape is determined by solving the Young-Laplace equation, completing an iteration cycle. The iteration is continued until both the shape and the wave converge. Of particular interest are the shapes of large drops that sustain equilibrium, beyond a certain degree of flattening, by becoming more flattened at a decreasing sound pressure level. The predictions for flattening versus acoustic radiation stress, for drops of different sizes, compare favorably with experimental data.

  13. Techniques and Software for Monolithic Preconditioning of Moderately-sized Geodynamic Stokes Flow Problems

    NASA Astrophysics Data System (ADS)

    Sanan, Patrick; May, Dave A.; Schenk, Olaf; Bollhöffer, Matthias

    2017-04-01

    Geodynamics simulations typically involve the repeated solution of saddle-point systems arising from the Stokes equations. These computations often dominate the time to solution. Direct solvers are known for their robustness and "black box" properties, yet exhibit superlinear memory requirements and time to solution. More complex multilevel-preconditioned iterative solvers have been very successful for large problems, yet their use can require more effort from the practitioner in terms of setting up a solver and choosing its parameters. We champion an intermediate approach, based on leveraging the power of modern incomplete factorization techniques for indefinite symmetric matrices. These provide an interesting alternative in situations between the regimes where direct solvers are an obvious choice and those where complex, scalable, iterative solvers are an obvious choice. That is, much like their relatives for definite systems, ILU/ICC-preconditioned Krylov methods and ILU/ICC-smoothed multigrid methods, the approaches demonstrated here provide a useful addition to the solver toolkit. We present results with a simple, PETSc-based, open-source Q2-Q1 (Taylor-Hood) finite element discretization, in 2 and 3 dimensions, with the Stokes and Lamé (linear elasticity) saddle-point systems. Attention is paid to cases in which full-operator incomplete factorization gives an improvement in time to solution over direct solution methods (which may not even be feasible due to memory limitations), without the complication of more complex (or at least, less automatic) preconditioners or smoothers. As an important factor in the relevance of these tools is their availability in portable software, we also describe open-source PETSc interfaces to the factorization routines.
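
    The kind of intermediate approach advocated here is easy to try with standard tools; the sketch below applies SciPy's incomplete LU as a preconditioner for GMRES on a toy sparse system (a 2D Laplacian stands in for the saddle-point operator; truly indefinite Stokes systems need the specialized incomplete factorizations the abstract refers to).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse system: 2D Poisson operator on a 100 x 100 grid.
n = 100
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)          # wrap it as a preconditioner
x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))               # info == 0 on convergence
```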

  14. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

    A novel polarization-insensitive metasurface with over 25 dB of monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface comprises carefully arranged unit cells of spatially varied dimensions, which enables approximately uniform diffusion of the incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method from conventional antenna array pattern synthesis is applied to find the best arrangement of the unit cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction behavior predicted by the full wave simulation software Ansys HFSS, and good agreement is observed.
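
    The pattern-synthesis loop referred to above can be sketched as a Gerchberg-Saxton-style alternating projection: FFT the element weights to the pattern domain, clip pattern peaks toward a diffuse target, inverse-FFT back, and re-impose the phase-only (unit-cell) constraint. All sizes, the clipping level, and the iteration count below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

N, M = 32, 256                                   # unit cells, pattern samples
rng = np.random.default_rng(0)
w = np.exp(1j * 2 * np.pi * rng.random(N))       # random phase-only start

for _ in range(200):
    AF = np.fft.fft(w, M)                        # array factor (zero-padded FFT)
    target = np.minimum(np.abs(AF), 0.3 * np.abs(AF).max())   # clip the peaks
    AF = target * np.exp(1j * np.angle(AF))      # impose magnitude, keep phase
    w = np.exp(1j * np.angle(np.fft.ifft(AF)[:N]))  # back to phase-only weights

# Peak-to-average pattern level; it drops toward 1 as the diffusion improves.
print(np.abs(np.fft.fft(w, M)).max() / np.sqrt(N))
```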

  15. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution Penalized Weighted Least Squares (PWLS) algorithm, in which the volume is parameterized as a union of fine and coarse voxel grids and detector pixels are selectively binned. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling corresponds to a reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high-resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view. PMID:27694701

  16. LIDAR TS for ITER core plasma. Part II: simultaneous two wavelength LIDAR TS

    NASA Astrophysics Data System (ADS)

    Gowers, C.; Nielsen, P.; Salzmann, H.

    2017-12-01

    We have shown recently, and in more detail at this conference (Salzmann et al), that the LIDAR approach to ITER core TS measurements requires only two mirrors in the inaccessible port plug area of the machine. This leads to simplified and robust alignment, lower risk of mirror damage by plasma contamination, and much simpler calibration, compared with the awkward and vulnerable optical geometry of the conventional imaging TS approach currently under development by ITER. In the present work we have extended the simulation code used previously to include the case of launching two laser pulses of different wavelengths simultaneously in LIDAR geometry. The aim of this approach is to broaden the choice of lasers available for the diagnostic. In the simulation code it is assumed that two short-duration (300 ps) laser pulses of different wavelengths from an Nd:YAG laser are launched through the plasma simultaneously. The temperature and density profiles are deduced in the usual way, but from the resulting combined scattered signals in the different spectral channels of the single spectrometer. The spectral response and quantum efficiencies of the detectors used in the simulation are taken from catalogue data for commercially available Hamamatsu MCP-PMTs. The response times, gateability and tolerance to stray light levels of this type of photomultiplier have already been demonstrated in the JET LIDAR system and give sufficient spatial resolution to meet the ITER specification. Here we present the new simulation results from the code. They demonstrate that, when the detectors are combined with this two-laser LIDAR approach, the full range of the specified ITER core plasma Te and ne can be measured with sufficient accuracy. Thus, with commercially available detectors and a simple modification of an Nd:YAG laser similar to that currently being used in the conventional ITER core TS design mentioned above, the ITER requirements can be met.

  17. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
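
    In the linear Gaussian case the algorithm admits a compact sketch: treat the modeling error m = (A - A0)x as Gaussian, estimate its mean and covariance from the current distribution of the unknown, fold it into the effective noise, re-solve, and repeat. The toy operators below (a random "accurate" model A and a perturbed cheap model A0) are assumptions for illustration, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_obs, sigma = 40, 30, 0.01
A = rng.standard_normal((m_obs, n)) / np.sqrt(n)    # accurate forward model
A0 = A + 0.05 * rng.standard_normal((m_obs, n))     # cheap approximate model
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m_obs)

C0 = np.eye(n)                       # prior covariance (prior mean is zero)
mu, C = np.zeros(n), C0.copy()
for it in range(10):
    D = A - A0                       # error operator: the only full-model use
    m_mean, m_cov = D @ mu, D @ C @ D.T           # current model-error statistics
    Gamma = m_cov + sigma**2 * np.eye(m_obs)      # effective noise covariance
    K = C0 @ A0.T @ np.linalg.inv(A0 @ C0 @ A0.T + Gamma)
    mu = K @ (y - m_mean)            # posterior mean under the cheap model
    C = C0 - K @ A0 @ C0             # posterior covariance
print(np.linalg.norm(mu - x_true))   # the error shrinks over the sweeps
```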

  18. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
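
    The basic measurement itself is simple once a profile is in hand; the sketch below computes the FWHM of a 1D profile by linear interpolation at the half-maximum crossings. Following the recommendation above, with an iterative algorithm one would apply it to a low-contrast point source embedded in background at a fixed iteration number; the Gaussian test profile is an assumption used to check the helper.

```python
import numpy as np

def fwhm(profile, pixel_mm=1.0):
    """FWHM of a 1D profile via linear interpolation at half maximum."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    l, r = above[0], above[-1]
    left = l - (p[l] - half) / (p[l] - p[l - 1])    # fractional left crossing
    right = r + (p[r] - half) / (p[r] - p[r + 1])   # fractional right crossing
    return (right - left) * pixel_mm

x = np.linspace(-10, 10, 81)                        # 0.25 mm sampling
profile = np.exp(-x**2 / (2 * 1.5**2))              # sigma = 1.5 mm Gaussian
print(fwhm(profile, pixel_mm=0.25))                 # ~3.53 mm = 2.355 * sigma
```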

  19. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be found easily. The use of the proposed method for solving big full eigenproblems (N ≈ 10³), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by iterative conjugate gradient methods, which can be used without danger of breaking down thanks to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
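
    For orientation, the first phase (subspace iteration with a Rayleigh-Ritz projection) can be sketched as below for Ax = λBx; this toy version uses a dense direct solve for the inner linear systems, which is exactly the step the paper's fully iterative method replaces with its breakdown-safe conjugate gradient solver.

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(A, B, k=4, n_iters=50, seed=0):
    """Subspace iteration + Rayleigh-Ritz for the leftmost eigenpairs of Ax = lambda Bx."""
    n = A.shape[0]
    X = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, k)))[0]
    for _ in range(n_iters):
        Y = np.linalg.solve(A, B @ X)        # inverse iteration (dense solve here)
        Q = np.linalg.qr(Y)[0]               # orthonormal basis of the subspace
        Ar, Br = Q.T @ A @ Q, Q.T @ B @ Q    # Rayleigh-Ritz projection
        w, V = eigh(Ar, Br)                  # small generalized eigenproblem
        X = Q @ V
    return w, X

A, B = np.diag(np.arange(1.0, 101.0)), np.eye(100)
w, X = subspace_iteration(A, B)
print(w)    # approximates the 4 leftmost eigenvalues: 1, 2, 3, 4
```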

  20. Overview of ASDEX Upgrade results

    NASA Astrophysics Data System (ADS)

    Zohm, H.; Adamek, J.; Angioni, C.; Antar, G.; Atanasiu, C. V.; Balden, M.; Becker, W.; Behler, K.; Behringer, K.; Bergmann, A.; Bertoncelli, T.; Bilato, R.; Bobkov, V.; Boom, J.; Bottino, A.; Brambilla, M.; Braun, F.; Brüdgam, M.; Buhler, A.; Chankin, A.; Classen, I.; Conway, G. D.; Coster, D. P.; de Marné, P.; D'Inca, R.; Drube, R.; Dux, R.; Eich, T.; Engelhardt, K.; Esposito, B.; Fahrbach, H.-U.; Fattorini, L.; Fink, J.; Fischer, R.; Flaws, A.; Foley, M.; Forest, C.; Fuchs, J. C.; Gál, K.; García Muñoz, M.; Gemisic Adamov, M.; Giannone, L.; Görler, T.; Gori, S.; da Graça, S.; Granucci, G.; Greuner, H.; Gruber, O.; Gude, A.; Günter, S.; Haas, G.; Hahn, D.; Harhausen, J.; Hauff, T.; Heinemann, B.; Herrmann, A.; Hicks, N.; Hobirk, J.; Hölzl, M.; Holtum, D.; Hopf, C.; Horton, L.; Huart, M.; Igochine, V.; Janzer, M.; Jenko, F.; Kallenbach, A.; Kálvin, S.; Kardaun, O.; Kaufmann, M.; Kick, M.; Kirk, A.; Klingshirn, H.-J.; Koscis, G.; Kollotzek, H.; Konz, C.; Krieger, K.; Kurki-Suonio, T.; Kurzan, B.; Lackner, K.; Lang, P. T.; Langer, B.; Lauber, P.; Laux, M.; Leuterer, F.; Likonen, J.; Liu, L.; Lohs, A.; Lunt, T.; Lyssoivan, A.; Maggi, C. F.; Manini, A.; Mank, K.; Manso, M.-E.; Mantsinen, M.; Maraschek, M.; Martin, P.; Mayer, M.; McCarthy, P.; McCormick, K.; Meister, H.; Meo, F.; Merkel, P.; Merkel, R.; Mertens, V.; Merz, F.; Meyer, H.; Mlynek, A.; Monaco, F.; Müller, H.-W.; Münich, M.; Murmann, H.; Neu, G.; Neu, R.; Neuhauser, J.; Nold, B.; Noterdaeme, J.-M.; Pautasso, G.; Pereverzev, G.; Poli, E.; Potzel, S.; Püschel, M.; Pütterich, T.; Pugno, R.; Raupp, G.; Reich, M.; Reiter, B.; Ribeiro, T.; Riedl, R.; Rohde, V.; Roth, J.; Rott, M.; Ryter, F.; Sandmann, W.; Santos, J.; Sassenberg, K.; Sauter, P.; Scarabosio, A.; Schall, G.; Schilling, H.-B.; Schirmer, J.; Schmid, A.; Schmid, K.; Schneider, W.; Schramm, G.; Schrittwieser, R.; Schustereder, W.; Schweinzer, J.; Schweizer, S.; Scott, B.; Seidel, U.; Sempf, M.; Serra, F.; Sertoli, M.; Siccinio, M.; Sigalov, A.; Silva, A.; Sips, A. C. C.; Speth, E.; Stäbler, A.; Stadler, R.; Steuer, K.-H.; Stober, J.; Streibl, B.; Strumberger, E.; Suttrop, W.; Tardini, G.; Tichmann, C.; Treutterer, W.; Tröster, C.; Urso, L.; Vainonen-Ahlgren, E.; Varela, P.; Vermare, L.; Volpe, F.; Wagner, D.; Wigger, C.; Wischmeier, M.; Wolfrum, E.; Würsching, E.; Yadikin, D.; Yu, Q.; Zasche, D.; Zehetbauer, T.; Zilker, M.

    2009-10-01

    ASDEX Upgrade was operated with a fully W-covered wall in 2007 and 2008. Stationary H-modes at the ITER target values and improved H-modes with H up to 1.2 were run without any boronization. The boundary conditions set by the full W wall (high enough ELM frequency, high enough central heating and low enough power density arriving at the target plates) require significant scenario development, but will apply to ITER as well. D retention has been reduced and stationary operation with saturated wall conditions has been found. Concerning confinement, impurity ion transport across the pedestal is neoclassical, explaining the strong inward pinch of high-Z impurities in between ELMs. In improved H-mode, the width of the temperature pedestal increases with heating power, consistent with a β_pol,ped^(1/2) scaling. In the area of MHD instabilities, disruption mitigation experiments using massive Ne injection reach volume averaged values of the total electron density close to those required for runaway suppression in ITER. ECRH at the q = 2 surface was successfully applied to delay density limit disruptions. The characterization of fast particle losses due to MHD has shown the importance of different loss mechanisms for NTMs, TAEs and also beta-induced Alfven eigenmodes (BAEs). Specific studies addressing the first ITER operational phase show that O1 ECRH at the HFS assists reliable low-voltage breakdown. During ramp-up, additional heating can be used to vary li to fit within the ITER range. Confinement and power threshold in He are more favourable than in H, suggesting that He operation could allow us to assess H-mode operation in the non-nuclear phase of ITER operation.

  1. Results of high heat flux tests of tungsten divertor targets under plasma heat loads expected in ITER and tokamaks (review)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budaev, V. P., E-mail: budaev@mail.ru

    2016-12-15

    Heat loads on the tungsten divertor targets in ITER and in tokamak power reactors reach ~10 MW m⁻² in the steady state of DT discharges, increasing to ~0.6–3.5 GW m⁻² under disruptions and ELMs. The results of high heat flux tests (HHFTs) of tungsten under such transient plasma heat loads are reviewed in the paper. The main attention is paid to the description of the surface microstructure, recrystallization, and the morphology of cracks on the target. Effects of melting, cracking of tungsten, drop erosion of the surface, and formation of corrugated and porous layers are observed. Production of submicron-sized tungsten dust and the effects of the inhomogeneous tungsten surface on the plasma–wall interaction are discussed. In conclusion, the necessity of further HHFTs and investigations of the durability of tungsten under high pulsed plasma loads on the ITER divertor plates, including disruptions and ELMs, is stressed.

  2. Tungsten dust impact on ITER-like plasma edge

    DOE PAGES

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; ...

    2015-01-12

    The impact of tungsten dust originating from the divertor plates on the performance of the edge plasma in an ITER-like discharge is evaluated using computer modeling with the coupled dust-plasma transport code DUSTT-UEDGE. Different dust injection parameters, including dust size and mass injection rates, are surveyed. It is found that tungsten dust injection at rates as low as a few mg/s can lead to dangerously high tungsten impurity concentrations in the plasma core. Dust injections at rates of a few tens of mg/s are shown to have a significant effect on edge plasma parameters and dynamics in ITER-scale tokamaks. The large impact of certain phenomena, such as dust shielding by an ablation cloud and the thermal force on tungsten ions, on dust/impurity transport in the edge plasma, and consequently on the core tungsten contamination level, is demonstrated. Lastly, it is also found that the high-Z impurities provided by dust can induce macroscopic self-sustained plasma oscillations in the plasma edge, leading to large temporal variations of the edge plasma parameters and of the heat load to the divertor target plates.

  3. Finite element analysis of heat load of tungsten relevant to ITER conditions

    NASA Astrophysics Data System (ADS)

    Zinovev, A.; Terentyev, D.; Delannay, L.

    2017-12-01

    A computational procedure is proposed to predict the initiation of intergranular cracks in tungsten with the ITER-specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by cyclic heat loads arising from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of crack initiation. The simulated heat load cycle is representative of the edge-localized modes anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the orientation of the grains with respect to the heat flux direction, attaining higher values when the flux is perpendicular to the elongated grains, where crack initiation is apparently promoted.

  4. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in quantifying coronary calcium.

    PubMed

    Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi

    2016-01-01

    Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters caused by ASIR, as compared to filtered back projection (FBP), may influence the quantification of coronary calcium. To investigate the influence of ASIR on calcium quantification in comparison to FBP, CT images from 352 patients were reconstructed using FBP alone and FBP combined with ASIR 30%, 50%, 70%, and ASIR 100%, based on the same raw data. Image noise, plaque density, Agatston scores, and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP (each P < 0.001). Use of ASIR reduced the Agatston score by 10.5% to 31.0%. In the calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, ASIR may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  5. GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.

    PubMed

    de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica

    2018-05-15

    Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular spans, and projection sizes. Reconstruction time varied linearly with the number of projections and quadratically with projection size, but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies, together with GPU kernels, enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, cutting the total reconstruction time from several hours to a few minutes.
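
    As a hedged illustration of the Split Bregman splitting named above — not the authors' GPU implementation, and reduced to the simplest case in which the system operator is the identity (TV denoising) — a minimal NumPy sketch; the parameters mu and lam and the periodic boundary handling are assumptions:

        import numpy as np

        def split_bregman_tv(f, mu=10.0, lam=1.0, n_iter=50):
            # Anisotropic TV denoising via Split Bregman: alternate an exact
            # quadratic solve for u (FFT, periodic boundaries) with closed-form
            # shrinkage for the auxiliary gradient variables d.
            # f: noisy image (2D float array).
            shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
            gradx = lambda v: np.roll(v, -1, axis=0) - v   # forward differences
            grady = lambda v: np.roll(v, -1, axis=1) - v
            div = lambda px, py: (px - np.roll(px, 1, axis=0)) \
                               + (py - np.roll(py, 1, axis=1))  # backward-difference divergence
            n, m = f.shape
            kx = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n))[:, None]
            ky = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(m))[None, :]
            denom = mu + lam * (4.0 - kx - ky)   # Fourier symbol of mu - lam*Laplacian
            u = f.copy()
            dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
            for _ in range(n_iter):
                rhs = mu * f - lam * div(dx - bx, dy - by)
                u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
                ux, uy = gradx(u), grady(u)
                dx = shrink(ux + bx, 1.0 / lam)   # closed-form d-update
                dy = shrink(uy + by, 1.0 / lam)
                bx = bx + ux - dx                 # Bregman variable updates
                by = by + uy - dy
            return u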

  6. Dust measurements in tokamaks (invited).

    PubMed

    Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C

    2008-10-01

    Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C₂ dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudakov, D. L.; Yu, J. H.; Boedo, J. A.

    Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C₂ dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.

  8. Plasma cleaning of ITER edge Thomson scattering mock-up mirror in the EAST tokamak

    NASA Astrophysics Data System (ADS)

    Yan, Rong; Moser, Lucas; Wang, Baoguo; Peng, Jiao; Vorpahl, Christian; Leipold, Frank; Reichle, Roger; Ding, Rui; Chen, Junling; Mu, Lei; Steiner, Roland; Meyer, Ernst; Zhao, Mingzhong; Wu, Jinhua; Marot, Laurent

    2018-02-01

    First mirrors are the key element of all optical and laser diagnostics in ITER. Facing the plasma directly, the surface of the first mirrors can be sputtered by energetic particles or coated with contaminants eroded from the first wall (tungsten and beryllium), resulting in degradation of the reflectivity. The impurity deposits emphasize the necessity of in situ cleaning of first mirrors in ITER. The mock-up first mirror system for the ITER edge Thomson scattering diagnostic has been cleaned in EAST, for the first time in a tokamak, using radio frequency capacitively coupled plasma. The cleaning properties, namely the removal of contaminants and the homogeneity of cleaning, were investigated with molybdenum mirror insets (25 mm diameter) located at five positions over the mock-up plate (center to edge), on which 10 nm of aluminum oxide, used as a beryllium proxy, were deposited. The cleaning efficiency was evaluated using energy dispersive x-ray spectroscopy, reflectivity measurements and x-ray photoelectron spectroscopy. Using argon or neon plasma, without magnetic field in the laboratory and with a 1.7 T magnetic field in the EAST tokamak, the aluminum oxide films were homogeneously removed. Full recovery of the mirrors' reflectivity was attained after cleaning in EAST with the magnetic field, and the cleaning efficiency was about 40 times higher than without the magnetic field. All these results are promising for the plasma cleaning baseline scenario of ITER.

  9. Fine‐resolution conservation planning with limited climate‐change information

    USGS Publications Warehouse

    Shah, Payal; Mallory, Mindy L.; Ando, Amy W.; Guntenspergen, Glenn R.

    2017-01-01

    Climate‐change induced uncertainties in future spatial patterns of conservation‐related outcomes make it difficult to implement standard conservation‐planning paradigms. A recent study translates Markowitz's risk‐diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate‐change scenarios for carrying out fine‐resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk‐return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate‐change information and full climate‐change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate‐change forecasts such that the best possible risk‐return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate‐change information could be reduced by 17% relative to other iterative approaches.

  10. Status of the ITER Electron Cyclotron Heating and Current Drive System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio

    2015-10-07

    We present the electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER, which consists of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond. The development of the EC system faces significant challenges, which include not only an advanced microwave system but also compliance with stringent requirements associated with nuclear safety, as ITER became the first fusion device licensed as a basic nuclear installation as of 9 November 2012. Finally, since the conceptual design of the EC system was established in 2007, the EC system has progressed to a preliminary design stage in 2012 and is now moving forward toward a final design.

  11. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction approach is the three-step method, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors, both in terms of quality and speed, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
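
    Of the reconstructors named above, the Kaczmarz algorithm is the easiest to sketch: it cyclically projects the iterate onto the hyperplane defined by each measurement row. A minimal dense-matrix version in NumPy (a generic textbook form, not the OCTOPUS implementation):

        import numpy as np

        def kaczmarz(A, b, n_sweeps=50):
            # Cyclic Kaczmarz: project the iterate onto each row's hyperplane in turn.
            m, n = A.shape
            x = np.zeros(n)
            row_norms = np.einsum('ij,ij->i', A, A)   # squared norm of each row
            for _ in range(n_sweeps):
                for i in range(m):
                    if row_norms[i] > 0.0:
                        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x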

  12. Calculation of the angular radiance distribution for a coupled atmosphere and canopy

    NASA Technical Reports Server (NTRS)

    Liang, Shunlin; Strahler, Alan H.

    1993-01-01

    The radiative transfer equations for a coupled atmosphere and canopy are solved numerically by an improved Gauss-Seidel iteration algorithm. The radiation field is decomposed into three components: unscattered sunlight, single-scattering radiance, and multiple-scattering radiance, for which the corresponding equations and boundary conditions are set up and their analytical or iterative solutions explicitly derived. The classic Gauss-Seidel algorithm has been widely applied in atmospheric research; this is its first application to calculating the multiple-scattering radiance of a coupled atmosphere and canopy. The algorithm yields the internal radiation field as well as the radiances at the boundaries. Any form of bidirectional reflectance distribution function (BRDF) as a boundary condition can be easily incorporated into the iteration procedure. The hotspot effect of the canopy is accommodated by modifying the extinction coefficients of upward single-scattering radiation and unscattered sunlight using the formulation of Nilson and Kuusk. To reduce the computation for the case of large optical thickness, an improved iteration formula is derived to speed convergence. The upwelling radiances have been evaluated for different atmospheric conditions, leaf area index (LAI), leaf angle distribution (LAD), leaf size, and so on. The formulation presented in this paper is also well suited to analyzing the relative magnitude of multiple-scattering and single-scattering radiance in both the visible and near-infrared regions.
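
    The Gauss-Seidel iteration underlying the solver sweeps through the unknowns and reuses each freshly updated value within the same sweep. A minimal sketch for a generic diagonally dominant linear system (the radiative transfer discretization itself is not reproduced here):

        import numpy as np

        def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
            # Solve A x = b; component i uses already-updated x[0..i-1].
            n = len(b)
            x = np.zeros(n)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                    break
            return x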

  13. Sub-scale Inverse Wind Turbine Blade Design Using Bound Circulation

    NASA Astrophysics Data System (ADS)

    Kelley, Christopher; Berg, Jonathan

    2014-11-01

    A goal of the National Rotor Testbed project at Sandia is to design a sub-scale wind turbine blade that has similitude to a modern, commercial-size blade. However, a smaller-diameter wind turbine operating at the same tip-speed ratio exhibits a different range of operating Reynolds numbers across the blade span, thus changing the local lift and drag coefficients. Differences in load distribution also affect the wake dynamics and stability. An inverse wind turbine blade design tool has been implemented which uses a target, dimensionless circulation distribution from a full-scale blade to find the chord and twist along a sub-scale blade. In addition, airfoil polar data are interpolated from a few specified span stations, leading to a smooth, manufacturable blade. The iterative process perturbs chord and twist, after running a blade element momentum theory code, to reduce the residual sum of squares between the modeled sub-scale circulation and the target full-scale circulation. It is shown that the converged sub-scale design also leads to performance similarity in thrust and power coefficients. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy under Contract DE-AC04-94AL85000.
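
    A schematic of the iterative loop described above, assuming a hypothetical bem_circulation(chord, twist) routine standing in for the blade element momentum code; the simple proportional update rules are illustrative, not the Sandia tool's actual perturbation scheme:

        import numpy as np

        def inverse_design(gamma_target, bem_circulation, chord0, twist0,
                           gain=0.1, tol=1e-8, max_iter=500):
            # Perturb chord and twist after each BEM run to shrink the residual
            # sum of squares against the target circulation distribution.
            chord = np.asarray(chord0, float).copy()
            twist = np.asarray(twist0, float).copy()
            for _ in range(max_iter):
                resid = gamma_target - bem_circulation(chord, twist)
                if np.sum(resid ** 2) < tol:     # residual sum of squares
                    break
                chord *= 1.0 + gain * resid      # more chord raises local loading
                twist += gain * resid            # heuristic twist nudge (degrees)
            return chord, twist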

  14. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which only requires a small field of view, due to the potential reduction in imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and that the optimization is object- and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP favors a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels). The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  15. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full cyclic testing in SULTAN: III. The importance of strand surface roughness in long twist pitch conductors

    DOE PAGES

    Sanabria, Charlie; Lee, Peter J.; Starch, William; ...

    2016-05-31

    As part of the ITER conductor qualification process, 3 m long Cable-in-Conduit Conductors (CICCs) were tested at the SULTAN facility under conditions simulating ITER operation so as to establish the current sharing temperature, Tcs, as a function of multiple full Lorentz force loading cycles. After a comprehensive evaluation of both the Toroidal Field (TF) and the Central Solenoid (CS) conductors, it was found that Tcs degradation was common in long twist pitch TF conductors while short twist pitch CS conductors showed some Tcs increase. However, one kind of TF conductor containing superconducting strand fabricated by the Bochvar Institute of Inorganic Materials (VNIINM) avoided Tcs degradation despite having long twist pitch. In our earlier metallographic autopsies of long and short twist pitch CS conductors, we observed a substantially greater transverse strand movement under Lorentz force loading for long twist pitch conductors, while short twist pitch conductors had negligible transverse movement. With help from the literature, we concluded that the transverse movement was not the source of Tcs degradation but rather an increase of the compressive strain in the Nb₃Sn filaments, possibly induced by longitudinal movement of the wires. Like all TF conductors, this TF VNIINM conductor showed large transverse motions under Lorentz force loading, but Tcs actually increased, as in all short twist pitch CS conductors. We here propose that the high surface roughness of the VNIINM strand may be responsible for the suppression of the compressive strain enhancement (characteristic of long twist pitch conductors). Furthermore, it appears that increasing strand surface roughness could improve the performance of long twist pitch CICCs.

  16. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background: Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose: To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods: Fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor's water phantoms. Results: There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and between -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28%, varying with image content. Conclusion: There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  17. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality.

    PubMed

    Østerås, Bjørn Helge; Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-08-01

    Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR), fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor's water phantoms. There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and between -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28%, varying with image content. There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality.

  18. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full cyclic testing in SULTAN: III. The importance of strand surface roughness in long twist pitch conductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanabria, Charlie; Lee, Peter J.; Starch, William

    As part of the ITER conductor qualification process, 3 m long Cable-in-Conduit Conductors (CICCs) were tested at the SULTAN facility under conditions simulating ITER operation so as to establish the current sharing temperature, Tcs, as a function of multiple full Lorentz force loading cycles. After a comprehensive evaluation of both the Toroidal Field (TF) and the Central Solenoid (CS) conductors, it was found that Tcs degradation was common in long twist pitch TF conductors while short twist pitch CS conductors showed some Tcs increase. However, one kind of TF conductor containing superconducting strand fabricated by the Bochvar Institute of Inorganic Materials (VNIINM) avoided Tcs degradation despite having long twist pitch. In our earlier metallographic autopsies of long and short twist pitch CS conductors, we observed a substantially greater transverse strand movement under Lorentz force loading for long twist pitch conductors, while short twist pitch conductors had negligible transverse movement. With help from the literature, we concluded that the transverse movement was not the source of Tcs degradation but rather an increase of the compressive strain in the Nb₃Sn filaments, possibly induced by longitudinal movement of the wires. Like all TF conductors, this TF VNIINM conductor showed large transverse motions under Lorentz force loading, but Tcs actually increased, as in all short twist pitch CS conductors. We here propose that the high surface roughness of the VNIINM strand may be responsible for the suppression of the compressive strain enhancement (characteristic of long twist pitch conductors). Furthermore, it appears that increasing strand surface roughness could improve the performance of long twist pitch CICCs.

  19. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.

    PubMed

    Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-07

    It is well known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which only requires a small field of view, due to the potential reduction in imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and that the optimization is object- and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP favors a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels). The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  20. Full genome virus detection in fecal samples using sensitive nucleic acid preparation, deep sequencing, and a novel iterative sequence classification algorithm.

    PubMed

    Cotten, Matthew; Oude Munnink, Bas; Canuti, Marta; Deijs, Martin; Watson, Simon J; Kellam, Paul; van der Hoek, Lia

    2014-01-01

    We have developed a full genome virus detection process that combines sensitive nucleic acid preparation optimised for virus identification in fecal material with Illumina MiSeq sequencing and a novel post-sequencing virus identification algorithm. Enriched viral nucleic acid was converted to double-stranded DNA and subjected to Illumina MiSeq sequencing. The resulting short reads were processed with a novel iterative Python algorithm, SLIM, for the identification of sequences with homology to known viruses. De novo assembly was then used to generate full viral genomes. The sensitivity of this process was demonstrated with a set of fecal samples from HIV-1 infected patients. A quantitative assessment of the mammalian, plant, and bacterial virus content of this compartment was generated, and the deep sequencing data were sufficient to assemble 12 complete viral genomes from 6 virus families. The method detected high levels of enteropathic viruses that are normally controlled in healthy adults but may be involved in the pathogenesis of HIV-1 infection, and it will provide a powerful tool for virus detection and for analyzing changes in the fecal virome associated with HIV-1 progression and pathogenesis.

  1. Full Genome Virus Detection in Fecal Samples Using Sensitive Nucleic Acid Preparation, Deep Sequencing, and a Novel Iterative Sequence Classification Algorithm

    PubMed Central

    Cotten, Matthew; Oude Munnink, Bas; Canuti, Marta; Deijs, Martin; Watson, Simon J.; Kellam, Paul; van der Hoek, Lia

    2014-01-01

    We have developed a full genome virus detection process that combines sensitive nucleic acid preparation optimised for virus identification in fecal material with Illumina MiSeq sequencing and a novel post-sequencing virus identification algorithm. Enriched viral nucleic acid was converted to double-stranded DNA and subjected to Illumina MiSeq sequencing. The resulting short reads were processed with a novel iterative Python algorithm, SLIM, for the identification of sequences with homology to known viruses. De novo assembly was then used to generate full viral genomes. The sensitivity of this process was demonstrated with a set of fecal samples from HIV-1 infected patients. A quantitative assessment of the mammalian, plant, and bacterial virus content of this compartment was generated, and the deep sequencing data were sufficient to assemble 12 complete viral genomes from 6 virus families. The method detected high levels of enteropathic viruses that are normally controlled in healthy adults but may be involved in the pathogenesis of HIV-1 infection, and it will provide a powerful tool for virus detection and for analyzing changes in the fecal virome associated with HIV-1 progression and pathogenesis. PMID:24695106

  2. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

    The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by requiring the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well and, even at low reduced-frequency levels of 0.1 or less, requires fewer than 100 time steps per cycle at transonic Mach numbers. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.

  3. Assessment of the measurement performance of the in-vessel system of gap 6 of the ITER plasma position reflectometer using a finite-difference time-domain Maxwell full-wave code.

    PubMed

    da Silva, F; Heuraux, S; Ricardo, E; Quental, P; Ferreira, J

    2016-11-01

    We conducted a first assessment of the measurement performance of the in-vessel components at gap 6 of the ITER plasma position reflectometry system with the aid of a synthetic Ordinary Mode (O-mode) broadband frequency-modulated continuous-wave reflectometer implemented with REFMUL, a 2D finite-difference time-domain full-wave Maxwell code. These simulations take into account the system location within the vacuum vessel as well as its access to the plasma. The plasma case considered is a baseline scenario from Fusion for Energy. We concluded that for the analyzed scenario, (i) the plasma curvature and non-equatorial position of the antenna have negligible impact on the measurements; (ii) the cavity-like space surrounding the antenna can cause deflection and splitting of the probing beam; and (iii) multi-reflections on the blanket wall cause a substantial error, preventing the system from operating within the required error margin.

  4. Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study

    NASA Astrophysics Data System (ADS)

    Yang, Huachen; Zhang, Jianzhong

    2018-06-01

    In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets, and limited OBS data. Both the integrated towed-streamer seismic data and the OBS data have low-frequency components. Therefore, at early iterations of the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model, and the towed-streamer seismic data play the major role in later iterations to improve the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when the starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.
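
    The two-stage schedule described above can be sketched as plain gradient descent with stage-dependent data misfits. Here grad_stage1 (OBS plus integrated streamer data) and grad_stage2 (towed-streamer data) are hypothetical adjoint-state gradient callables, and the step sizes and iteration counts are placeholders:

        import numpy as np

        def staged_fwi(m0, grad_stage1, grad_stage2, step=1e-2, n1=30, n2=60):
            # Stage 1: low-frequency data build the background velocity model.
            # Stage 2: towed-streamer data refine the high-wavenumber detail.
            m = np.asarray(m0, float).copy()
            for _ in range(n1):
                m -= step * grad_stage1(m)   # background update
            for _ in range(n2):
                m -= step * grad_stage2(m)   # resolution update
            return m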

  5. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
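
    A schematic of the space mapping loop under stated assumptions: fine(x) and coarse(x) are hypothetical callables returning response vectors (standing in for the FDFD simulation and the transmission-line model, respectively), and the input-space mapping is approximated by a constant shift. This is the textbook aggressive-space-mapping idea, not the paper's exact formulation:

        import numpy as np
        from scipy.optimize import minimize

        def space_mapping(fine, coarse, x0, target, n_iter=5):
            x = np.asarray(x0, float)
            for _ in range(n_iter):
                r_fine = fine(x)   # one expensive fine-model simulation per iteration
                # Parameter extraction: coarse input that reproduces the fine response.
                z = minimize(lambda p: np.sum((coarse(p) - r_fine) ** 2), x).x
                shift = z - x      # constant approximation of the input-space mapping
                # Re-optimize the shifted (aligned) coarse surrogate against the target.
                x = minimize(lambda y: np.sum((coarse(y + shift) - target) ** 2), x).x
            return x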

  6. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    PubMed

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert the particle size distribution (PSD) from diffraction data is presented. By using the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. The ICGA is therefore amended by the introduction of an iteration step-adjusting parameter and is applied to simulated data and to measured samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.
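
    A minimal sketch of the kind of inversion this record refers to, cast as projected gradient descent for a nonnegative particle size distribution; the damping factor beta is a hypothetical stand-in for the paper's iteration step-adjusting parameter, and A is the scattering kernel matrix:

        import numpy as np

        def invert_psd(A, b, beta=0.5, n_iter=500):
            # min ||A x - b||^2 subject to x >= 0 (a PSD cannot be negative).
            x = np.zeros(A.shape[1])
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = np.maximum(x - beta / L * grad, 0.0)  # damped step + projection
            return x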

  7. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and it effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
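
    A sketch of a variable step size NLMS update, under assumptions: the error-driven step rule below is illustrative only (the paper's exact rule, which also uses the iteration count, is not given in the abstract):

        import numpy as np

        def vss_nlms(x, d, L=32, mu_max=1.0, eps=1e-8):
            # Adaptive FIR filter of length L driven by input x toward desired d.
            w = np.zeros(L)
            e = np.zeros(len(x))
            for n in range(L, len(x)):
                u = x[n - L:n][::-1]                  # most recent L input samples
                e[n] = d[n] - w @ u                   # a priori error
                mu = mu_max * e[n] ** 2 / (e[n] ** 2 + 1.0)  # step shrinks as error falls
                w += mu * e[n] * u / (u @ u + eps)    # normalized LMS update
            return w, e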

  8. Progress on the application of ELM control schemes to ITER scenarios from the non-active phase to DT operation

    NASA Astrophysics Data System (ADS)

    Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.

    2014-03-01

    Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ~2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account. The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.

  9. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    ...propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to... self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to... determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.

  10. Low pressure and high power rf sources for negative hydrogen ions for fusion applications (ITER neutral beam injection).

    PubMed

    Fantz, U; Franzen, P; Kraus, W; Falter, H D; Berger, M; Christ-Koch, S; Fröschle, M; Gutser, R; Heinemann, B; Martens, C; McNeely, P; Riedl, R; Speth, E; Wünderlich, D

    2008-02-01

    The international fusion experiment ITER requires for the plasma heating and current drive a neutral beam injection system based on negative hydrogen ion sources at 0.3 Pa. The ion source must deliver a current of 40 A D(-) for up to 1 h with an accelerated current density of 200 A/m(2) and a ratio of coextracted electrons to ions below 1. The extraction area is 0.2 m(2) from an aperture array with an envelope of 1.5 x 0.6 m(2). A high power rf-driven negative ion source has been successfully developed at the Max-Planck Institute for Plasma Physics (IPP) at three test facilities in parallel. Current densities of 330 and 230 A/m(2) have been achieved for hydrogen and deuterium, respectively, at a pressure of 0.3 Pa and an electron/ion ratio below 1 for a small extraction area (0.007 m(2)) and short pulses (<4 s). In the long pulse experiment, equipped with an extraction area of 0.02 m(2), the pulse length has been extended to 3600 s. A large rf source, with the width and half the height of the ITER source but without extraction system, is intended to demonstrate the size scaling and plasma homogeneity of rf ion sources. The source operates routinely now. First results on plasma homogeneity obtained from optical emission spectroscopy and Langmuir probes are very promising. Based on the success of the IPP development program, the high power rf-driven negative ion source has been chosen recently for the ITER beam systems in the ITER design review process.

  11. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 1: One-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDE's) is developed and tested. It uses a modified D'Yakanov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which the traditional iterative methods lack; that is: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
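
    The key idea — a relaxation factor that varies from grid point to grid point — can be sketched with point SOR on the 2D Poisson equation; the uniform 5-point stencil and the choice of the omega array are assumptions for illustration, not the paper's D'Yakanov-Gunn formulation:

        import numpy as np

        def local_sor_poisson(f, h, omega, n_iter=2000):
            # Point SOR for -Laplacian(u) = f on a uniform grid (u = 0 on the
            # boundary), with a grid-point-dependent relaxation factor omega[i, j].
            u = np.zeros_like(f)
            for _ in range(n_iter):
                for i in range(1, f.shape[0] - 1):
                    for j in range(1, f.shape[1] - 1):
                        gs = 0.25 * (u[i + 1, j] + u[i - 1, j]
                                     + u[i, j + 1] + u[i, j - 1]
                                     + h * h * f[i, j])      # Gauss-Seidel value
                        u[i, j] += omega[i, j] * (gs - u[i, j])  # local over-relaxation
            return u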

  12. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose: An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods: The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching Pursuit algorithm, and the RF pulse is designed using a Conjugate Gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results: The method is compared to other iterative (Matching Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods, as well as to uniform-density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion: An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used to select the SEFs individually to guide trajectory design, or be adapted to design and optimize specific trajectories of interest. PMID:25203286

  13. Design of the ITER Electron Cyclotron Heating and Current Drive Waveguide Transmission Line

    NASA Astrophysics Data System (ADS)

    Bigelow, T. S.; Rasmussen, D. A.; Shapiro, M. A.; Sirigiri, J. R.; Temkin, R. J.; Grunloh, H.; Koliner, J.

    2007-11-01

    The ITER ECH transmission line system is designed to deliver the power from twenty-four 1 MW 170 GHz gyrotrons and three 1 MW 127.5 GHz gyrotrons to the equatorial and upper launchers. Work on the performance requirements, the initial design of components, and the layout between the gyrotrons and the launchers is underway. Similar 63.5 mm ID corrugated waveguide systems have been built and installed on several fusion experiments; however, none has operated at the high frequency and long pulse length required for ITER. Prototype components are being tested at low power to estimate ohmic and mode conversion losses. In order to develop and qualify the ITER components prior to procurement of the full set of 24 transmission lines, a 170 GHz high power test of a complete prototype transmission line is planned. Testing of the transmission line at 1-2 MW can be performed with a modest power (~0.5 MW) tube in a low loss (10-20%) resonant ring configuration. A 140 GHz long pulse, 400 kW gyrotron will be used in the initial tests, and a 170 GHz gyrotron will be used when it becomes available. Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Dept. of Energy under contract DE-AC05-00OR22725.

  14. Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    .R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team

    Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that (ICRF and LH) heating in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile versus Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), and the Coppi-Tang energy transport model is used with modified settings to provide the best fit to the experimental electron temperature profile. Other transport models have been examined as well.

  15. Resizing procedure for structures under combined mechanical and thermal loading

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Narayanaswami, R.

    1976-01-01

    The fully-stressed design (FSD) approach appears to be the most widely used method for sizing flight structures under strength and minimum-gage constraints. Almost all experience with FSD has been with structures primarily under mechanical loading, as opposed to thermal loading. In this method the structural sizes are iterated, with the step size depending on the ratio of the total stress to the allowable stress. In this paper, the thermal fully-stressed design (TFSD) procedure developed for problems involving substantial thermal stress is extended to biaxial stress members using a von Mises failure criterion. The TFSD resizing procedure for uniaxial stress is restated and the new procedure for biaxial stress members is developed. Results are presented for an application of the two procedures to size a simplified wing structure.
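
    The stress-ratio resizing step mentioned above is simple to state in code. A minimal sketch of the classic FSD iteration, assuming a hypothetical analyze(areas) routine that returns member stresses from a structural analysis:

        import numpy as np

        def fsd_resize(areas0, analyze, allowable, min_gage, n_iter=20):
            # After each analysis, scale every member area by the ratio of its
            # working stress to the allowable stress, clipped at minimum gage.
            a = np.asarray(areas0, float).copy()
            for _ in range(n_iter):
                sigma = analyze(a)            # re-analyze the resized structure
                a = np.maximum(a * np.abs(sigma) / allowable, min_gage)
            return a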

  16. Atmospheric particulate analysis using angular light scattering

    NASA Technical Reports Server (NTRS)

    Hansen, M. Z.

    1980-01-01

    Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
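
    The library search that supplies the first guess can be sketched as a least-squares nearest-neighbor lookup; the library layout (a dict keyed by size parameter and refractive index) is an assumption for illustration:

        import numpy as np

        def best_library_fit(measured, library):
            # library: {(size_parameter, refractive_index): scattering-matrix vector}
            # Return the key and entry with the smallest sum of squared residuals.
            key, entry = min(
                library.items(),
                key=lambda kv: np.sum((np.asarray(kv[1]) - measured) ** 2))
            return key, entry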

  17. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may emphasize either accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce large performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they can be stacked and have cumulative effects on the reduction of the time complexity.
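
    For reference, the classical MPD inner loop that the three modifications accelerate looks like this (a generic sketch assuming unit-norm dictionary columns; the correlation threshold plays the role of the pruning criterion described above):

        import numpy as np

        def matching_pursuit(signal, dictionary, max_atoms=10, corr_tol=0.0):
            # dictionary: (n_samples, n_atoms) array with unit-norm columns.
            residual = signal.astype(float).copy()
            coeffs = {}
            for _ in range(max_atoms):
                corr = dictionary.T @ residual           # cross-correlation with atoms
                k = int(np.argmax(np.abs(corr)))         # best-fit atom
                if abs(corr[k]) <= corr_tol:             # stopping / pruning criterion
                    break
                coeffs[k] = coeffs.get(k, 0.0) + corr[k]
                residual -= corr[k] * dictionary[:, k]   # subtract selected atom
            return coeffs, residual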

  18. Iterative approach of dual regression with a sparse prior enhances the performance of independent component analysis for group functional magnetic resonance imaging (fMRI) data.

    PubMed

    Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan

    2012-12-01

    This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L1-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach. The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
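
    The two-step DR loop with the added L1 shrinkage can be sketched in a few lines. This is a schematic reading of the approach, not the authors' implementation; the soft-threshold weight lam and the fixed iteration count are assumptions.

        import numpy as np

        def soft_threshold(x, lam):
            # L1 proximal step: shrink small weights toward zero (sparse prior)
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def iterative_dual_regression(data, group_maps, lam=0.1, n_iters=10):
            """data: (time, voxels); group_maps: (components, voxels) from group ICA."""
            sp = group_maps.copy()
            for _ in range(n_iters):
                tc = data @ np.linalg.pinv(sp)      # step 1: subject time courses (LS)
                sp = np.linalg.pinv(tc) @ data      # step 2: subject spatial maps (LS)
                sp = soft_threshold(sp, lam)        # sparse prior on the spatial maps
            return sp, tc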

  19. NOAA GOES Geostationary Satellite Server

    Science.gov Websites

    Directory of full-size GOES imagery: West CONUS infrared, visible, and water vapor images (full size, MPEG, and loop formats), plus Alaska and Hawaii infrared and visible sectors.

  20. 77 FR 22564 - Proposed Collection; Comment Request; Safety Standards for Full-Size Baby Cribs and Non-Full-Size...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-16

    Consumer Product Safety Commission [Docket No. CPSC-2012-0019]. Proposed Collection; Comment Request; Safety Standards for Full-Size Baby Cribs and Non-Full-Size Baby Cribs; Compliance Form. Concerns child care centers' compliance with the recent CPSC safety standards for full-size and non-full-size baby cribs.

  1. Extraction of a Weak Co-Channel Interfering Communication Signal Using Complex Independent Component Analysis

    DTIC Science & Technology

    2013-06-01

    Cites: "Robust independent component analysis by iterative maximization of the kurtosis contrast with algebraic optimal step size," IEEE Transactions on Neural Networks, vol. 21, no. 2 (http://www.i3s.unice.fr/~zarzoso/biblio/tnn10.pdf).

  2. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.

  3. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
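
    The AGMG V-cycle itself is beyond a short sketch, but the preconditioned conjugate-gradient loop it accelerates is standard. Below, scipy's cg is run on a 1D Poisson stand-in for the discretized potential equation, with a simple Jacobi preconditioner standing in for the AGMG V-cycle; both stand-ins are assumptions for illustration only.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        n = 1000
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Jacobi (diagonal) preconditioner as a placeholder for the AGMG V-cycle.
        d_inv = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda r: d_inv * r)

        iters = 0
        def count(xk):
            global iters
            iters += 1

        x, info = cg(A, b, M=M, callback=count)
        print(f"converged: {info == 0}, iterations: {iters}")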

  4. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
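
    A bare-bones version of the alternating scheme: an ART sweep with non-negativity projection, followed by TV steepest descent whose step is tied to the size of the POCS update. This simplified stand-in (dense system matrix, fixed sweep counts, a crude step rule) only illustrates the structure; the paper's adaptive rules are more detailed.

        import numpy as np

        def tv_grad(img, eps=1e-8):
            """Gradient of a smoothed isotropic TV term for a 2D image."""
            dx = np.diff(img, axis=0, append=img[-1:, :])
            dy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            gx, gy = dx / mag, dy / mag
            div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
            return -div

        def pocs_tv(A, y, shape, n_outer=50, n_tv=10):
            """A: (rays, pixels) dense system matrix; y: measured projections."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1) + 1e-12
            for _ in range(n_outer):
                x_prev = x.copy()
                for i in range(A.shape[0]):          # ART sweep (data fidelity)
                    r = y[i] - A[i] @ x
                    x += (r / row_norms[i]) * A[i]
                np.maximum(x, 0, out=x)              # non-negativity projection
                step = 0.2 * np.linalg.norm(x - x_prev)  # step scaled by POCS change
                img = x.reshape(shape)
                for _ in range(n_tv):                # steepest descent on TV
                    g = tv_grad(img)
                    img -= step * g / (np.linalg.norm(g) + 1e-12)
                x = img.ravel()
            return x.reshape(shape)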

  5. Retrieving optical constants of glasses with variable iron abundance

    NASA Astrophysics Data System (ADS)

    Carli, C.; Roush, T. L.; Capaccioni, F.; Baraldi, A.

    2013-12-01

    Visible and Near Infrared (VNIR, ~0.4-2.5 μm) spectroscopy is an important tool to explore the surface composition of objects in our Solar System. Using this technique, different minerals have been recognized on the surfaces of solar system bodies. One of the principal products of extrusive volcanism and impact cratering is a glassy component that can be abundant and thus significantly influence the spectral signature of the region investigated. Different types of glasses have been proposed and identified on the lunar surface and in star-forming regions near young stellar objects. Here we report an initial effort to retrieve the optical constants of volcanic glasses formed in oxidizing terrestrial-like conditions. We also investigated how those calculations are affected by the grain size distribution. Bidirectional reflectance spectra, obtained with incidence and emission angles of 30° and 0°, respectively, were measured in the VNIR on powders of different grain sizes for four different glassy compositions. Hapke's model of the interaction of light with particulate surfaces was used to determine the imaginary index, k, at each wavelength by iteratively minimizing the difference between measured and calculated reflectance. The basic approach to retrieving the optical constants was to use multiple grain sizes of the same sample and assume all grain sizes are compositionally equivalent. Unless independently known as a function of wavelength, an additional assumption must be made regarding the real index of refraction, n. The median size of each particle size separate was adopted for initially estimating k. Then, iterating the Hapke analysis results with a subtractive Kramers-Kronig analysis, we were able to determine the wavelength dependence of n. For each composition we used the k-values estimated for all the grain sizes to calculate a mean k-value representing that composition. These values were then used to fit the original spectra by varying only the grain sizes. As a separate estimate of the k-values, we will use transmission measurements in the VNIR. Two slabs with different thicknesses will be measured for each composition. These data will be used to determine a k value, and a comparison between k values obtained from the two different techniques will be discussed.
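
    The per-wavelength estimation of k is a one-dimensional root find. The sketch below uses bisection against a deliberately crude Beer-Lambert-style stand-in for the forward model; it is emphatically not Hapke's model, and the sample values are invented, but the iterative structure (adjust k until modeled and measured reflectance agree) is the same.

        import numpy as np

        def toy_reflectance(k, wavelength_um, grain_um):
            """Stand-in forward model (NOT Hapke's): attenuation through one grain."""
            alpha = 4 * np.pi * k / (wavelength_um * 1e-6)   # absorption coefficient
            return np.exp(-alpha * grain_um * 1e-6)

        def invert_k(r_meas, wavelength_um, grain_um, lo=1e-8, hi=1.0, tol=1e-10):
            """Bisect on k; assumes reflectance decreases monotonically with k."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if toy_reflectance(mid, wavelength_um, grain_um) > r_meas:
                    lo = mid    # modeled reflectance too bright: need more absorption
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        k = invert_k(r_meas=0.35, wavelength_um=1.0, grain_um=50.0)
        print(f"retrieved k = {k:.3e}")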

  6. SU-D-201-05: Phantom Study to Determine Optimal PET Reconstruction Parameters for PET/MR Imaging of Y-90 Microspheres Following Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maughan, N; Conti, M; Parikh, P

    2015-06-15

    Purpose: Imaging Y-90 microspheres with PET/MRI following hepatic radioembolization has the potential for predicting treatment outcome and, in turn, improving patient care. The positron decay branching ratio, however, is very small (32 ppm), yielding images with poor statistics even when therapy doses are used. Our purpose is to find PET reconstruction parameters that maximize the PET recovery coefficients and minimize noise. Methods: An initial 7.5 GBq of Y-90 chloride solution was used to fill an ACR phantom for measurements with a PET/MRI scanner (Siemens Biograph mMR). Four hot cylinders and a warm background activity volume of the phantom were filled with a 10:1 ratio. Phantom attenuation maps were derived from scaled CT images of the phantom and included the MR phased array coil. The phantom was imaged at six time points between 7.5–1.0 GBq total activity over a period of eight days. PET images were reconstructed via OP-OSEM with 21 subsets and varying iteration number (1–5), post-reconstruction filter size (5–10 mm), and either absolute or relative scatter correction. Recovery coefficients, SNR, and noise were measured as well as total activity in the phantom. Results: For the 120 different reconstructions, recovery coefficients ranged from 0.1–0.6 and improved with increasing iteration number and reduced post-reconstruction filter size. SNR, however, improved substantially with lower iteration numbers and larger post-reconstruction filters. From the phantom data, we found that performing 2 iterations, 21 subsets, and applying a 5 mm Gaussian post-reconstruction filter provided optimal recovery coefficients at a moderate noise level for a wide range of activity levels. Conclusion: The choice of reconstruction parameters for Y-90 PET images greatly influences both the accuracy of measurements and image quality. We have found reconstruction parameters that provide optimal recovery coefficients with minimized noise. Future work will include the effects of the body matrix coil and off-center measurements.

  7. Fine-resolution conservation planning with limited climate-change information.

    PubMed

    Shah, Payal; Mallory, Mindy L; Ando, Amy W; Guntenspergen, Glenn R

    2017-04-01

    Climate-change induced uncertainties in future spatial patterns of conservation-related outcomes make it difficult to implement standard conservation-planning paradigms. A recent study translates Markowitz's risk-diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate-change scenarios for carrying out fine-resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk-return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate-change information and full climate-change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate-change forecasts such that the best possible risk-return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate-change information could be reduced by 17% relative to other iterative approaches. © 2016 Society for Conservation Biology.

  8. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE PAGES

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara; ...

    2018-02-20

    In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations and use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.
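
    Rayleigh quotient iteration is compact enough to show in full. In the sketch below, a dense solve stands in for the shifted-system solve that Denovo delegates to the MG Krylov solver, and the symmetric test matrix is arbitrary; this illustrates the method, not the Denovo implementation.

        import numpy as np

        def rayleigh_quotient_iteration(A, x0, n_iters=10):
            x = x0 / np.linalg.norm(x0)
            rho = x @ A @ x                    # Rayleigh quotient (eigenvalue estimate)
            for _ in range(n_iters):
                try:
                    # Shifted solve: in Denovo this is handed to the Krylov solver.
                    y = np.linalg.solve(A - rho * np.eye(len(x)), x)
                except np.linalg.LinAlgError:
                    break                      # shift hit an exact eigenvalue
                x = y / np.linalg.norm(y)
                rho = x @ A @ x
            return rho, x

        rng = np.random.default_rng(1)
        B = rng.random((50, 50))
        A = (B + B.T) / 2                      # arbitrary symmetric test matrix
        val, vec = rayleigh_quotient_iteration(A, rng.random(50))
        print("RQI eigenvalue estimate:", val)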

  10. Full orbit computations of ripple-induced fusion α-particle losses from burning tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClements, K.G.

    A full orbit code is used to compute collisionless losses of fusion α particles from three proposed burning plasma tokamaks: the International Tokamak Experimental Reactor (ITER); a spherical tokamak power plant (STPP) [T. C. Hender, A. Bond, J. Edwards, P. J. Karditsas, K. G. McClements, J. Mustoe, D. V. Sherwood, G. M. Voss, and H. R. Wilson, Fusion Eng. Des. 48, 255 (2000)]; and a spherical tokamak components test facility (CTF) [H. R. Wilson, G. M. Voss, R. J. Akers, L. Appel, A. Dnestrovskij, O. Keating, T. C. Hender, M. J. Hole, G. Huysmans, A. Kirk, P. J. Knight, M. Loughlin, K. G. McClements, M. R. O'Brien, and D. Yu. Sychugov, Proceedings of the 20th IAEA Fusion Energy Conference, Invited Paper FT/3-1Ra]. It has been suggested that α-particle transport could be enhanced due to cyclotron resonance with the toroidal magnetic field ripple. However, calculations for inductive operation in ITER yield a loss rate that appears to be broadly consistent with the predictions of guiding center theory, falling monotonically as the number of toroidal field coils N is increased (and hence the ripple amplitude is decreased). For STPP and CTF the loss rate does not decrease monotonically with N, but collisionless losses are generally low in absolute terms. As in the case of ITER, there is no evidence that finite Larmor radius effects would seriously degrade fusion α-particle confinement.

  11. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed, with a detailed description of how the total problem of structural sizing can be broken down into subproblems for the best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  12. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. Under the random boundary condition, the results in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient by modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition, a weighted average of extended imaging gathers, can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  14. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits that can improve the operational efficiency of an organization: the ability to record data systematically and quickly, to reduce human and system errors, and to update the database automatically and efficiently. Often, more than one reader is needed when installing an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is an optimization process involving power adjustment, because the coordinates of each RFID reader must be determined. Therefore, algorithms inspired by nature are often used. In this study, the PSO algorithm is used because it has a small number of parameters, fast simulation times, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient usage; failure to do so may degrade the performance of PSO and the quality of the resulting optimization. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. In addition, the study recommends the most effective setting for both parameters, namely 200 for the number of iterations and 800 for the swarm size. These results will enable PSO to operate more efficiently when optimizing RFID network planning.
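
    A minimal PSO loop makes the two studied parameters explicit. The defaults below follow the abstract's recommendation (swarm size 800, 200 iterations); the inertia and acceleration coefficients w, c1, c2 are common textbook defaults, assumed here rather than taken from the paper.

        import numpy as np

        def pso(objective, lo, hi, n_swarm=800, n_iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_swarm, dim))         # particle positions
            v = np.zeros_like(x)                            # particle velocities
            pbest = x.copy()                                # personal bests
            pbest_f = np.apply_along_axis(objective, 1, x)
            g = pbest[np.argmin(pbest_f)]                   # global best
            for _ in range(n_iters):
                r1, r2 = rng.random((2, n_swarm, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(objective, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        # Toy usage: minimize a shifted paraboloid over [0, 10]^2.
        best, fval = pso(lambda p: ((p - 3.0) ** 2).sum(),
                         np.zeros(2), np.full(2, 10.0))
        print(best, fval)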

  15. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background: Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose: To evaluate a new iterative reconstruction (IR) algorithm for dose reduction of in vitro coronary artery calcium scoring at different tube currents. Material and Methods: An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. Reference tube current was 80 mAs as standard and stepwise reduced to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results: In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large-size phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion: New iterative reconstruction kernels may allow for a reduction in tube current for established Agatston scoring protocols and consequently for a substantial reduction in radiation exposure. PMID:28607763
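
    For reference, Agatston scoring itself is mechanical once the reconstruction is fixed. The sketch below applies the standard published rules (130 HU threshold, lesions of at least 1 mm², density weights 1-4, per-slice scores summed); the conventional 3 mm slices and scipy's connected-component labelling are assumptions of this illustration, not details from the study.

        import numpy as np
        from scipy import ndimage

        def agatston_score(slices_hu, pixel_area_mm2):
            """slices_hu: iterable of 2D axial CT arrays in Hounsfield units,
            conventionally acquired at 3 mm slice spacing."""
            total = 0.0
            for sl in slices_hu:
                labels, n = ndimage.label(sl >= 130)      # candidate calcifications
                for i in range(1, n + 1):
                    lesion = labels == i
                    area = lesion.sum() * pixel_area_mm2
                    if area < 1.0:                        # ignore sub-millimetre specks
                        continue
                    peak = sl[lesion].max()
                    # Density weight: 130-199 HU -> 1, 200-299 -> 2,
                    # 300-399 -> 3, >= 400 -> 4.
                    weight = min(int(peak // 100), 4)
                    total += area * weight
            return total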

  16. Self-consistent field for fragmented quantum mechanical model of large molecular systems.

    PubMed

    Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao

    2016-01-30

    Fragment-based linear scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) of the electronic structure of a fragment must be solved first, followed by the updating of the inter-fragment electrostatic interactions. The two steps are carried out sequentially and repeated; as such, a significant total number of fragment SCF iterations is required to converge the total energy, which becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that the commutation between the two global matrices guarantees the commutation of the corresponding matrices in each fragment. Therefore, many highly efficient numerical techniques, such as the direct inversion in the iterative subspace (DIIS) method, can be employed to converge simultaneously the electronic structure of all fragments, reducing the computational cost significantly. Numerical examples for water clusters of different sizes suggest that the method shall be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.

  17. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.

  18. ELM mitigation with pellet ELM triggering and implications for PFCs and plasma performance in ITER

    DOE PAGES

    Baylor, Larry R.; Lang, P. T.; Allen, Steve L.; ...

    2014-10-05

    The triggering of rapid small edge localized modes (ELMs) by high frequency pellet injection has been proposed as a method to prevent large naturally occurring ELMs that can erode the ITER plasma facing components. Deuterium pellet injection has been used to successfully demonstrate the on-demand triggering of ELMs at much higher rates and with much smaller intensity than natural ELMs. The proposed hypothesis for the triggering mechanism of ELMs by pellets is the local pressure perturbation resulting from reheating of the pellet cloud that can exceed the local high-n ballooning mode threshold where the pellet is injected. Nonlinear MHD simulations of the pellet ELM triggering show destabilization of high-n ballooning modes by such a local pressure perturbation. A review of the recent pellet ELM triggering results from ASDEX Upgrade (AUG), DIII-D, and JET reveals that a number of uncertainties about this ELM mitigation technique still remain. These include the heat flux impact pattern on the divertor and wall from pellet-triggered and natural ELMs, the necessary pellet size and injection location to reliably trigger ELMs, and the level of fueling to be expected from ELM-triggering pellets and the synergy with larger fueling pellets. The implications of these issues for pellet ELM mitigation in ITER and its impact on the PFCs are presented along with the design features of the pellet injection system for ITER.

  19. Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation.

    PubMed

    Koyuncu, Can Fahrettin; Akhan, Ece; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2016-04-01

    Automated microscopy imaging systems facilitate high-throughput screening in molecular cellular biology research. The first step of these systems is cell nucleus segmentation, which has a great impact on the success of the overall system. The marker-controlled watershed is a technique commonly used in previous studies for nucleus segmentation. These studies define their markers by finding regional minima on the intensity/gradient and/or distance transform maps. They typically use the h-minima transform beforehand to suppress noise on these maps. The selection of the h value is critical: unnecessarily small values do not sufficiently suppress the noise, resulting in false and oversegmented markers, while unnecessarily large ones suppress too many pixels, causing missing and undersegmented markers. Because cell nuclei show different characteristics within an image, the same h value may not work to define correct markers for all the nuclei. To address this issue, in this work, we propose a new watershed algorithm that iteratively identifies its markers, considering a set of different h values. In each iteration, the proposed algorithm defines a set of candidates using a particular h value and selects the markers from those candidates provided that they fulfill the size requirement. Working with widefield fluorescence microscopy images, our experiments reveal that the use of multiple h values in our iterative algorithm leads to better segmentation results compared to its counterparts. © 2016 International Society for Advancement of Cytometry.
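
    The marker-selection idea translates to a short scikit-image sketch. The acceptance rule below (a candidate becomes a marker if it meets a size threshold and does not overlap an already accepted marker) is a simplification of the paper's criteria, and min_size is a hypothetical value.

        import numpy as np
        from scipy import ndimage
        from skimage.morphology import h_minima
        from skimage.segmentation import watershed

        def iterative_hmin_watershed(dist_map, h_values=(2, 4, 8, 16), min_size=10):
            """dist_map: distance transform, large inside nuclei."""
            surface = -dist_map                     # minima of -dist = nucleus centres
            markers = np.zeros(dist_map.shape, dtype=int)
            next_id = 1
            for h in h_values:                      # iterate over the set of h values
                labels, n = ndimage.label(h_minima(surface, h))
                for i in range(1, n + 1):
                    region = labels == i
                    # Accept the candidate only if it fulfils the size requirement
                    # and does not collide with a previously accepted marker.
                    if region.sum() >= min_size and not markers[region].any():
                        markers[region] = next_id
                        next_id += 1
            return watershed(surface, markers, mask=dist_map > 0)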

  1. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
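
    The approximated reduction operator is essentially "FFT the dirty image, keep a weighted subset of cells." In the sketch below, the selection mask and weights are random placeholders for the quantities that the paper derives from the singular vectors of the measurement operator.

        import numpy as np

        def reduce_data(dirty_image, mask, weights):
            """Weighted, subsampled Fourier transform of the dirty image."""
            vis_grid = np.fft.fft2(dirty_image)
            return weights * vis_grid[mask]         # reduced data vector

        rng = np.random.default_rng(0)
        dirty = rng.standard_normal((256, 256))     # stand-in dirty image
        mask = rng.random((256, 256)) < 0.1         # keep ~10% of Fourier cells
        weights = rng.random(mask.sum())            # placeholder singular-value weights
        y_red = reduce_data(dirty, mask, weights)
        print(y_red.shape)                          # far below image size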

  2. Assessment of acquisition protocols for routine imaging of Y-90 using PET/CT

    PubMed Central

    2013-01-01

    Background: Despite the early theoretical prediction of the 0+-0+ transition of 90Zr, 90Y PET has only recently attracted growing interest for imaging radioembolization of liver tumors. The aim of this work was to determine the minimum detectable activity (MDA) of 90Y by PET imaging and the impact of time-of-flight (TOF) reconstruction on detectability and quantitative accuracy according to the lesion size. Methods: The study was conducted using a Siemens Biograph® mCT with a 22 cm large axial field of view. An IEC torso-shaped phantom containing five coplanar spheres was uniformly filled to achieve sphere-to-background ratios of 40:1. The phantom was imaged nine times in 14 days over 30 min. Sinograms were reconstructed with and without TOF information. A contrast-to-noise ratio (CNR) index was calculated using the Rose criterion, taking partial volume effects into account. The impact of reconstruction parameters on quantification accuracy, detectability, and spatial localization of the signal was investigated. Finally, six patients with hepatocellular carcinoma and four patients included in different 90Y-based radioimmunotherapy protocols were enrolled for the evaluation of the imaging parameters in a clinical situation. Results: The highest CNR was achieved with one iteration for both TOF and non-TOF reconstructions. The MDA, however, was found to be lower with TOF than with non-TOF reconstruction. There was no gain from adding TOF information in terms of CNR for concentrations higher than 2 to 3 MBq mL−1, except for infra-centimetric lesions. Recovered activity was highly underestimated when a single iteration or non-TOF reconstruction was used (10% to 150% less depending on the lesion size). The MDA was estimated at 1 MBq mL−1 for a TOF reconstruction and infra-centimetric lesions. Images from patients treated with microspheres were clinically relevant, unlike those of patients who received systemic injections of 90Y. Conclusions: Only one iteration and TOF were necessary to achieve an MDA around 1 MBq mL−1 and the most accurate localization of lesions. For precise quantification, at least three iterations gave the best performance, using TOF reconstruction and keeping an MDA of roughly 1 MBq mL−1. One and three iterations were mandatory to prevent false positive results for quantitative analysis of clinical data. Trial registration: IDRCB 2011-A00043-38 P101103 PMID:23414629

  3. Deuterium results at the negative ion source test facility ELISE

    NASA Astrophysics Data System (ADS)

    Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.

    2018-05-01

    The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the Extraction from a Large Ion Source Experiment (ELISE) test facility, a source of half this size has been operational since 2013. The goal of this experiment is to demonstrate high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable operation in steady state with an RF power of up to 300 kW. While in short pulses the required D- current density has almost been reached, the performance in long pulses is limited, particularly in deuterium, by inhomogeneous and unstable currents of co-extracted electrons. By applying refined caesium evaporation and distribution procedures, and by reducing and symmetrizing the electron currents, considerable progress has been made, and up to 190 A/m2 of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.

  4. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with high noise components, which present processing challenges especially for signal preservation and its true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested by both synthetic and field data gathers and also compared with the commonly used f-k least squares de-noising and f-x deconvolution.
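
    One pass of the filter (flatten a single dominant dip, median across traces, unflatten) can be sketched directly; the f-k dip scan and the iteration over residuals are omitted. Integer-sample trace shifts and the window defaults are simplifying assumptions.

        import numpy as np

        def dip_median_pass(gather, dip, half_width=5):
            """gather: (samples, traces); dip: event slope in samples per trace."""
            n_t, n_x = gather.shape
            flat = np.empty_like(gather)
            for ix in range(n_x):                   # shift traces to flatten the dip
                flat[:, ix] = np.roll(gather[:, ix], -int(round(dip * ix)))
            out = np.empty_like(flat)
            for ix in range(n_x):                   # lateral median along the flat event
                lo, hi = max(0, ix - half_width), min(n_x, ix + half_width + 1)
                out[:, ix] = np.median(flat[:, lo:hi], axis=1)
            for ix in range(n_x):                   # undo the flattening shifts
                out[:, ix] = np.roll(out[:, ix], int(round(dip * ix)))
            return out

        # Iterative use: retain the output as signal, subtract it from the input,
        # and repeat on the residual with the next dominant dip.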

  5. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

    Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures, based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as this has been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).

  7. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural network does not solve mathematical equations: by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface under the LabVIEW programming environment; both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure, and a programming routine was designed in NSDUAZ to calculate 7 IAEA survey meters using the fluence-to-dose conversion coefficients; the NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, the neural network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called Neutron Spectrometry and dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  8. Measurement of particle size distribution in mammalian cells in vitro by use of polarized light spectroscopy

    NASA Astrophysics Data System (ADS)

    Bartlett, Matthew; Huang, George; Larcom, Lyndon; Jiang, Huabei

    2004-02-01

    We demonstrate the feasibility of measuring the particle size distribution (PSD) of internal cell structures in vitro. We use polarized light spectroscopy to probe the internal morphology of mammalian breast cancer (MCF7) and cervical cancer (Siha) cells. We find that graphing the least-squares error versus the scatterer size provides insight into cell scattering. A nonlinear optimization scheme is used to determine the PSD iteratively. The results suggest that 2-μm particles (possibly the mitochondria) contribute most to the scattering. Other subcellular structures, such as the nucleoli and the nucleus, may also contribute significantly. We reconstruct the PSD of the mitochondria, as verified by optical microscopy. We also demonstrate the angle dependence of the PSD.

  9. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.

  10. Parallel fast multipole boundary element method applied to computational homogenization

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.

  11. Macromolecular Crystallization in Microfluidics for the International Space Station

    NASA Technical Reports Server (NTRS)

    Monaco, Lisa A.; Spearing, Scott

    2003-01-01

    At NASA's Marshall Space Flight Center, the Iterative Biological Crystallization (IBC) project has begun development of scientific hardware for macromolecular crystallization on the International Space Station (ISS). Currently, ISS crystallization research is limited to solution recipes that were prepared on the ground prior to launch. The proposed hardware will conduct solution mixing and dispensing on board the ISS, be fully automated, and have imaging functions via remote commanding from the ground. Utilizing microfluidic technology, IBC will allow for on-orbit iterations. The microfluidic LabChip(R) devices that have been developed together with Caliper Technologies will greatly benefit researchers by allowing precise fluid handling of nanoliter- to picoliter-sized volumes. IBC will maximize the science return by utilizing the microfluidic approach and will be a valuable tool for structural biologists investigating medically relevant projects.

  12. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performance of three iterative solvers for the large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations, employed here mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as the Generalized Minimal Residual (GMRES) method to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulating the dynamics of water housed inside a vertical cylindrical vessel that is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence in terms of both computational time and number of iterations.
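
    A toy version of the comparison can be reproduced on a 2D Poisson problem by counting GMRES inner iterations against Gauss-Seidel and SOR sweeps. The grid size, tolerances, and relaxation factor below are assumptions, and the paper's actual systems come from discretised NS equations; note that the `rtol` keyword requires a recent SciPy (older versions spell it `tol`).

    ```python
    # Sketch: GMRES vs. Gauss-Seidel vs. SOR on a small 2D Poisson system.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 20                                              # n x n interior grid
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
    b = np.ones(n * n)

    nit = [0]
    x, info = spla.gmres(A, b, rtol=1e-6, restart=200,
                         callback=lambda r: nit.__setitem__(0, nit[0] + 1),
                         callback_type="pr_norm")
    print("GMRES inner iterations:", nit[0])

    def sweeps(omega, tol=1e-6, maxit=50000):
        """Lexicographic SOR sweeps; omega = 1 gives Gauss-Seidel."""
        u = np.zeros((n + 2, n + 2))                    # zero Dirichlet boundary
        f = np.ones((n, n))
        for it in range(1, maxit + 1):
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 + f[i-1, j-1])
                    u[i, j] += omega * (gs - u[i, j])
            r = (f + u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                 - 4 * u[1:-1, 1:-1])
            if np.linalg.norm(r) < tol * np.linalg.norm(f):
                return it
        return maxit

    print("Gauss-Seidel sweeps:", sweeps(1.0))
    print("SOR sweeps (omega=1.74):", sweeps(1.74))
    ```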

  13. Optimization of the spatial resolution for the GE discovery PET/CT 710 by using NEMA NU 2-2007 standards

    NASA Astrophysics Data System (ADS)

    Yoon, Hyun Jin; Jeong, Young Jin; Son, Hye Joo; Kang, Do-Young; Hyun, Kyung-Yae; Lee, Min-Kyung

    2015-01-01

    The spatial resolution in positron emission tomography (PET) is fundamentally limited by the geometry of the detector elements, the positron range before annihilation, the acollinearity of the annihilation photons, the crystal decoding error, the penetration into the detector ring, and the reconstruction algorithm. In this paper, optimized parameters are suggested to produce high-resolution PET images using an iterative reconstruction algorithm. A phantom with three point sources structured with three capillary tubes was prepared with an axial extension of less than 1 mm and was filled with 18F-fluorodeoxyglucose (18F-FDG) at concentrations above 200 MBq/cc. The performance measures of all the PET images were acquired according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard procedures. The parameters for the iterative reconstruction were adjusted around the values recommended by General Electric (GE), and the optimized values of the spatial resolution and of the full width at half maximum (FWHM) or full width at tenth maximum (FWTM) were found for the best PET resolution. The axial and transverse spatial resolutions, according to filtered back-projection (FBP) at 1 cm off-axis, were 4.81 and 4.48 mm, respectively. The axial and transaxial spatial resolutions at 10 cm off-axis were 5.63 mm and 5.08 mm, respectively, where the transaxial resolution at 10 cm was evaluated as the average of the radial and tangential measurements. The recommended optimized parameters of the spatial resolution according to the NEMA phantom, for the number of subsets, the number of iterations, and the Gaussian post-filter, are 12, 3, and 3 mm for the iterative reconstruction VUE Point HD without the SharpIR algorithm (HD), and 12, 12, and 5.2 mm with SharpIR (HD.S), respectively, on the Advantage Workstation Volume Share 5 (AW4.6). The performance measurements for the GE Discovery PET/CT 710 using the NEMA NU 2-2007 standards from our results will be helpful in the quantitative analysis of PET scanner images. The spatial resolution was improved more by using an advanced algorithm such as HD.S than by using HD or FBP. The use of the optimized parameters for iterative reconstruction is strongly recommended for high-quality images from the GE Discovery PET/CT 710 scanner.
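
    For reference, the NEMA-style width measurement itself is simple: find where the point-source profile crosses half (or a tenth of) its maximum and interpolate linearly between samples. The profile and pixel pitch below are invented; real measurements use the scanner's reconstructed point-source images.

    ```python
    # Sketch of FWHM/FWTM extraction from a point-source profile by linear
    # interpolation, in the spirit of NEMA NU 2; the profile is a toy Gaussian.
    import numpy as np

    def width_at(profile, x, frac):
        """Width of `profile` at frac * max, using linear interpolation."""
        level = frac * profile.max()
        above = np.where(profile >= level)[0]
        i, j = above[0], above[-1]                    # first/last samples above level
        xl = np.interp(level, [profile[i-1], profile[i]], [x[i-1], x[i]])
        xr = np.interp(level, [profile[j+1], profile[j]], [x[j+1], x[j]])
        return xr - xl

    x = np.arange(0, 40.0, 2.0)                       # 2 mm pixel pitch (assumed)
    profile = np.exp(-0.5 * ((x - 19.0) / 2.0) ** 2)  # toy point-spread profile
    print("FWHM =", width_at(profile, x, 0.5), "mm")
    print("FWTM =", width_at(profile, x, 0.1), "mm")
    ```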

  14. Investigation of key parameters for the development of reliable ITER baseline operation scenarios using CORSICA

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Casper, T. A.; Snipes, J. A.

    2018-05-01

    ITER will demonstrate the feasibility of burning plasma operation by operating DT plasmas in the ELMy H-mode regime with a high fusion power gain, Q ~ 10. The 15 MA ITER baseline operation scenario has been studied using CORSICA, focusing on the entry to burn, flat-top burning plasma operation, and exit from burn. Burning plasma operation for about 400 s of the current flat-top was achieved in H-mode within the various engineering constraints imposed by the poloidal field coil and power supply systems. The target fusion gain (Q ~ 10) was achievable in the 15 MA ITER baseline operation with a moderate amount of total auxiliary heating power (~50 MW). It was observed that the tungsten (W) concentration needs to be maintained at a low level (n_W/n_e up to the order of 1.0 × 10^-5) to avoid radiative collapse and uncontrolled early termination of the discharge. The dynamic evolution of the density can modify the H-mode access unless the applied auxiliary heating power is significantly higher than the H-mode threshold power. Several qualitative sensitivity studies have been performed to provide guidance for further optimizing the plasma operation and performance. Increasing the density profile peaking factor was quite effective in increasing the alpha-particle self-heating power and the fusion power multiplication factor. Varying the mix of auxiliary heating powers showed that the fusion power multiplication factor can be reduced when the total auxiliary heating power is increased. As the 15 MA ITER baseline operation scenario requires the full capacity of the coil and power supply systems, the operation window for H-mode access and shape modification was narrow. The updated ITER baseline operation scenarios developed in this work will become a basis for further optimization studies, necessary alongside improvements in the understanding of burning plasma physics.

  15. Pivotal issues on relativistic electrons in ITER

    NASA Astrophysics Data System (ADS)

    Boozer, Allen H.

    2018-03-01

    The transfer of the plasma current from thermal to relativistic electrons is a threat to ITER achieving its mission. This danger is significantly greater in the nuclear than in the non-nuclear phase of ITER operations. Two issues are pivotal. The first is the extent and duration of magnetic surface breaking in conjunction with the thermal quenches. The second is the exponential sensitivity of the current transfer to three quantities: (1) the poloidal flux change required to e-fold the number of relativistic electrons, (2) the time τ_a after the beginning of the thermal quench before the accelerating electric field exceeds the Connor-Hastie field for runaway, and (3) the duration τ_op of the period in which magnetic surfaces remain open. Adequate knowledge does not exist to devise a reliable strategy for the protection of ITER. Uncertainties are sufficiently large that neither a negligible transfer nor a transfer of the full plasma current to relativistic electrons can be ruled out during the non-nuclear phase of ITER. Tritium decay can provide a sufficiently strong seed for a dangerous relativistic-electron current even if τ_a and τ_op are long enough to avoid relativistic electrons during non-nuclear operations. The breakup of magnetic surfaces associated with thermal quenches occurs on a time scale associated with fast magnetic reconnection, which means reconnection at an Alfvénic rather than a resistive rate. Alfvénic reconnection is well beyond the capabilities of existing computational tools for tokamaks, but its effects can be studied using its property of conserving magnetic helicity. Although the dangers to ITER from relativistic electrons have been known for twenty years, the critical issues have not been defined with sufficient precision to formulate an effective research program. Studies are particularly needed on plasma behavior in existing tokamaks during thermal quenches, behavior which could be clarified using the methods developed here.

  16. Strain-Annealing Based Grain Boundary Engineering to Evaluate its Sole Implication on Intergranular Corrosion in Extra-Low Carbon Type 304L Austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Pradhan, S. K.; Bhuyan, P.; Kaithwas, C.; Mandal, Sumantra

    2018-05-01

    Strain-annealing based thermo-mechanical processing has been performed to promote grain boundary engineering (GBE) in an extra-low carbon type austenitic stainless steel, without altering the grain size and residual strain, in order to evaluate its sole influence on intergranular corrosion. Single-step processing comprising a low pre-strain (~5 and 10 pct) followed by annealing at 1273 K for 1 hour has resulted in a large fraction of Σ3^n boundaries and significant disruption of the random high-angle grain boundary (RHAGB) connectivity. This is due to the occurrence of prolific multiple twinning in these specimens, as confirmed by their large twin-related domains and twin-related grain size ratio. Among the iterative processing schedules, the one comprising two cycles of 10 and 5 pct deformation followed by annealing at 1173 K for 1 hour has yielded the optimum GBE microstructure, with grain size and residual strain akin to the as-received condition. The specimens subjected to a higher number of iterations failed to realize GBE microstructures due to the occurrence of partial recrystallization. Owing to the optimum grain boundary character distribution, the GBE specimen has exhibited remarkable resistance against sensitization and intergranular corrosion as compared to the as-received condition. Furthermore, the lower depth of percolation in the GBE specimen is due to the significant disruption of the RHAGB connectivity, as confirmed by its large twin-related domain and lower fractal dimension.

  18. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    To improve the computation speed of matching-pursuit decomposition of seismic data, a parallel matching-pursuit algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and distribute them evenly across the nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of the matching-pursuit decomposition, and it also scales well. Searching for only one optimal Morlet wavelet per compute node in every iteration is the most efficient implementation.
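
    A minimal sketch of one such iteration is given below, with the per-peak Morlet search farmed out to local worker processes instead of MPI ranks; the signal, the peak locations, and the coarse parameter grid are all assumptions made for the example.

    ```python
    # One matching-pursuit iteration with the peak search distributed to workers.
    import numpy as np
    from multiprocessing import Pool

    t = np.linspace(0, 1, 1000)
    signal = np.cos(2 * np.pi * 30 * (t - 0.4)) * np.exp(-50 * (t - 0.4) ** 2)

    def best_atom(center):
        """Best-fitting Morlet-like atom near one envelope peak (grid search)."""
        best = (-np.inf, None)
        for f in range(10, 60, 2):
            for s in (0.02, 0.05, 0.1):
                atom = (np.exp(-0.5 * ((t - center) / s) ** 2)
                        * np.cos(2 * np.pi * f * (t - center)))
                atom /= np.linalg.norm(atom)
                c = signal @ atom
                if abs(c) > best[0]:
                    best = (abs(c), c * atom)        # store the scaled atom
        return best

    if __name__ == "__main__":
        peaks = [0.2, 0.4, 0.6, 0.8]                 # assumed envelope-peak locations
        with Pool(4) as pool:
            results = pool.map(best_atom, peaks)     # one peak per worker
        coeff, atom = max(results, key=lambda r: r[0])
        residual = signal - atom                     # subtract winner; iterate on residual
        print("residual energy:", np.linalg.norm(residual) / np.linalg.norm(signal))
    ```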

  19. Preconditioned conjugate residual methods for the solution of spectral equations

    NASA Technical Reports Server (NTRS)

    Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.

    1986-01-01

    Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equations form a very ill-conditioned, full-matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from applying either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and a comparison with other solution procedures for spectral equations is presented.

  20. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the iterated singly-unresolved subtraction terms

    NASA Astrophysics Data System (ADS)

    Bolzoni, Paolo; Somogyi, Gábor; Trócsányi, Zoltán

    2011-01-01

    We perform the integration of all iterated singly-unresolved subtraction terms, as defined in ref. [1], over the two-particle factorized phase space. We also sum over the unresolved parton flavours. The final result can be written as a convolution (in colour space) of the Born cross section and an insertion operator. We spell out the insertion operator in terms of 24 basic integrals that are defined explicitly. We compute the coefficients of the Laurent expansion of these integrals in two different ways, with the method of Mellin-Barnes representations and sector decomposition. Finally, we present the Laurent-expansion of the full insertion operator for the specific examples of electron-positron annihilation into two and three jets.

  1. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation

  2. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector.

    PubMed

    Barbisan, M; Zaniol, B; Pasqualotto, R

    2014-11-01

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H(-)/D(-) RF ion source, and MITICA, a prototype of the full-performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor operation and allow the performance of the two prototypes to be optimized. In particular, beam emission spectroscopy will measure the uniformity and divergence of the fast-particle beam exiting the ion source and travelling through the beam-line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emission in order to design this diagnostic and to study its performance. The paper describes the model underlying the simulations and presents the modelled Hα spectra for the MITICA experiment.

  3. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the mechanisms underlying deep brain stimulation remain unclear and under debate, and the selection of stimulation parameters is therefore challenging. Additionally, due to the complexity of the neural system and omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state, based on various measured variables, is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed that does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy for restoring the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, the key property of the proposed method, its independence from an accurate model, can be further verified.
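
    The underlying learning law is easy to state: after each repetition, correct the input trajectory with the tracking error of the previous trial. The sketch below applies an ILC update plus a small memory term to a toy first-order plant; the plant, gains, and horizon are assumptions, not the neuron model of the paper.

    ```python
    # Toy iterative learning control: refine the whole input trajectory trial by trial.
    import numpy as np

    T = 100
    ref = np.sin(np.linspace(0.0, 2 * np.pi, T))       # desired trajectory

    def run_trial(u):
        """One repetition of a simple first-order plant."""
        x, y = 0.0, np.zeros(T)
        for t in range(T):
            x = 0.9 * x + 0.1 * u[t]
            y[t] = x
        return y

    u = np.zeros(T)
    for trial in range(30):                            # learning iterations
        e = ref - run_trial(u)
        u += 4.0 * e                                   # proportional ILC update
        u[1:] += 0.5 * e[:-1]                          # small integral-like memory term
    print("final RMS tracking error:",
          np.sqrt(np.mean((ref - run_trial(u)) ** 2)))
    ```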

  4. LETTER TO THE EDITOR: Iteratively-coupled propagating exterior complex scaling method for electron hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor

    2004-02-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.

  5. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation, and it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences; this term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method performs well when no global pose differences or significant bending exist in the models, for example across families of similar shapes such as human femur and vertebra models.
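
    The flavour of the algorithm can be conveyed in a few lines: alternate closest-point correspondences with a smoothed displacement update. The sketch below uses single one-way correspondences and k-nearest-neighbour averaging as stand-ins for the paper's multiple two-way correspondences and 1-ring Laplacian regularization; the point clouds and step size are toys.

    ```python
    # Nonrigid-ICP style loop: correspondences, then regularized displacement.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    source = rng.uniform(0, 1, (200, 3))
    target = source + 0.05 * np.sin(4 * source)        # smoothly deformed copy

    tree = cKDTree(target)
    for it in range(20):
        _, idx = tree.query(source)                    # one-way closest points
        disp = target[idx] - source                    # raw displacement field
        # Laplacian-like regularization: average each displacement with its
        # nearest neighbours in the source cloud (stand-in for 1-ring averaging).
        _, nbr = cKDTree(source).query(source, k=6)
        disp = disp[nbr].mean(axis=1)
        source = source + 0.5 * disp                   # damped update step
    print("mean residual distance:", tree.query(source)[0].mean())
    ```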

  6. Networked iterative learning control design for discrete-time systems with stochastic communication delay in input and output channels

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ruan, Xiaoe

    2017-07-01

    This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delays occurring in the input and output channels, modelled as 0-1 Bernoulli-type stochastic variables. In both schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration; for the delayed signal of the system output, one scheme substitutes the synchronous predetermined desired trajectory, while the other takes the synchronous output from the previous operation. By means of the mathematical expectation, the tracking performance is analysed, showing that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices has full column rank. Finally, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.

  7. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot capture all of the structural information: CT information is lost where the effective thickness of the component along the direction of x-ray penetration exceeds the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small, fixed steps. Grey-consistency fusion and logarithmic demodulation are then applied to obtain a complete, low-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression shortcomings of analytical methods, EM-TV (expectation maximization-total variation) iterative reconstruction is used; in the iteration process, the reconstruction result obtained at one x-ray energy serves as the initial condition for the next. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provides higher reconstruction quality than the fusion reconstruction method.
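
    The EM half of EM-TV is the classical multiplicative MLEM update; a TV-denoising step between updates completes the method. The sketch below shows the EM update only, on a random toy system matrix rather than a real CT geometry; the varying-energy chaining would reuse one energy's result as the next one's initial condition, as described above.

    ```python
    # Core MLEM iteration (the EM part of EM-TV) on a toy tomography problem.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0, 1, (300, 100))            # toy projection matrix (rays x pixels)
    x_true = rng.uniform(0.5, 1.5, 100)
    y = rng.poisson(A @ x_true).astype(float)    # measured counts

    x = np.ones(100)
    sens = A.T @ np.ones(300)                    # sensitivity image
    for it in range(100):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x / sens * (A.T @ ratio)             # multiplicative EM update
        # In EM-TV, a TV-denoising step on x would follow here.
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```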

  8. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited).

    PubMed

    Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E

    2014-11-01

    At the high electron temperatures anticipated in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/(m_e c^2), may be insufficient; we present a more precise model with τ^2-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.

  9. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  10. Low activation steels welding with PWHT and coating for ITER test blanket modules and DEMO

    NASA Astrophysics Data System (ADS)

    Aubert, P.; Tavassoli, F.; Rieth, M.; Diegele, E.; Poitevin, Y.

    2011-02-01

    EUROFER weldability is investigated in support of the European material properties database and TBM manufacturing. Electron beam, hybrid, laser, and narrow-gap TIG processes have been applied to EUROFER-97 steel (thickness up to 40 mm), a reduced-activation ferritic-martensitic steel developed in Europe. These welding processes produce similar results, with high joint coefficients, and are well suited to minimizing residual distortion. The fusion zones are typically composed of martensite laths with small grain sizes; in the heat-affected zones, martensite grains contain carbide precipitates. High hardness values are measured in all these zones which, if not tempered, would degrade toughness and creep resistance. PWHT development has led to a one-step PWHT (750 °C/3 h) that, successfully applied to joints, restores good material performance. It produces lower distortion than a full austenitization PWHT process, which is not really applicable to a complex welded structure such as the TBM. Different tungsten coatings have been successfully processed on EUROFER material and have shown no real effect on the EUROFER base-material microstructure.

  12. Advanced Optics for a Full Quasi-Optical Front Steering ECRH Upper Launcher for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moro, A.; Alessi, E.; Bruschi, A.

    2009-11-26

    A full quasi-optical setup for the internal optics of the front-steering Electron Cyclotron Resonance Heating (ECRH) Upper Launcher for ITER was designed, proving to be feasible and favorable in terms of additional flexibility and cost reduction with respect to the former design. This full quasi-optical solution foresees the replacement of the mitre bends in the final section of the launcher with dedicated free-space mirrors to realize the last changes of direction in the launcher. A description of the launcher is given and its advantages are presented. The parameters of the expected output beams, as well as preliminary evaluations of truncation effects with the physical-optics GRASP code, are shown. Moreover, a study of mitre-bend replacement with single mirrors for multiple beams is described. In principle this could allow the beams to be larger at the mirror locations (with a further decrease of the peak power density due to partial overlapping) and has the additional advantage of providing a larger opening with compressed beams, avoiding conflicts with the port side-walls. Constraints on the setup, arising both from the resulting beam characteristics in the space of free parameters and from mechanical requirements, are taken into account in the analysis.

  13. LIDAR TS for ITER core plasma. Part I: layout & hardware

    NASA Astrophysics Data System (ADS)

    Salzmann, H.; Gowers, C.; Nielsen, P.

    2017-12-01

    The original time-of-flight design of the Thomson scattering diagnostic for the ITER core plasma was not retained by ITER, a decision justified by insufficiencies of some of the components. In this paper we show that, with available present-day technology, a LIDAR TS system is feasible which meets all the ITER specifications. As opposed to the conventional TS system, the LIDAR TS also measures the high-field side of the plasma. The optical layout of the front end has changed only slightly in comparison with the latest one considered by ITER; the main change is that it offers optical collection without any vignetting over the low-field side. The throughput of the system is defined only by the size and the acceptance angle of the detectors. This, in combination with the fact that the LIDAR system uses only one set of spectral channels for the whole line of sight, means that no absolute calibration using Raman or Rayleigh scattering from a non-hydrogen-isotope gas fill of the vessel is needed. Alignment of the system is easy, since the collection optics view the footprint of the laser on the inner wall. In the described design we use, simultaneously, two different wavelength pulses from a Nd:YAG laser system: the fundamental wavelength ensures measurements from 2 keV up to more than 40 keV, whereas injection of the second harmonic enables measurements of low temperatures. As the purpose of this paper is to show the technological feasibility of the LIDAR system, the hardware is considered in Part I. In Part II we demonstrate by numerical simulations that the accuracy of the measurements required by ITER is maintained throughout the given plasma parameter range; the effect of enhanced background radiation in the wavelength range 400 nm-500 nm is considered. Part III treats the recovery of calibration in case of changing spectral transmission of the front end. We also investigate how to improve the spatial resolution at the plasma edge.

  14. 3D laser imaging for ODOT interstate network at true 1-mm resolution.

    DOT National Transportation Integrated Search

    2014-12-01

    With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 MPH. This project provides rapid survey ...

  15. Primal Barrier Methods for Linear Programming

    DTIC Science & Technology

    1989-06-01

    A Theoretical Bound. Concerning the difficulties introduced by an ill-conditioned H^-1, Dikin [Dik67] and Stewart [Stew87] show for a full-rank A... [Dik67] I. I. Dikin (1967). Iterative solution of problems of linear and quadratic programming, Doklady Akademii Nauk SSSR, Tom 174, No. 4. [Fia79] A. V

  16. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data sets can be obtained with these methods, placing them in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to process the data, so a fast and effective iterative algorithm is needed to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data are used to demonstrate the value of this new fast inversion method.
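
    A common concrete instance of a non-monotone gradient method is the Barzilai-Borwein step, which is as cheap as steepest descent but is not forced to decrease the misfit at every iteration. The sketch below applies it to a Tikhonov-regularized least-squares problem with a random stand-in for the FTG sensitivity matrix; no claim is made that this matches the paper's exact scheme.

    ```python
    # Non-monotone (Barzilai-Borwein) gradient descent for regularized least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((400, 200))        # toy stand-in for the sensitivity matrix
    m_true = rng.standard_normal(200)
    d = G @ m_true + 0.01 * rng.standard_normal(400)
    lam = 0.1

    grad = lambda m: G.T @ (G @ m - d) + lam * m   # gradient of the regularized misfit

    m = np.zeros(200)
    g = grad(m)
    alpha = 1e-4                               # small safe first step
    for k in range(200):
        m_new = m - alpha * g
        g_new = grad(m_new)
        s, yv = m_new - m, g_new - g
        alpha = (s @ s) / (s @ yv)             # BB step: the misfit may rise temporarily
        m, g = m_new, g_new
    print("model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
    ```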

  17. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm.

    PubMed

    Liu, Haiguang; Spence, John C H

    2014-11-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
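
    The assign-and-merge loop can be sketched with synthetic intensity vectors: each "still" may arrive re-indexed by a known permutation, the current merged model decides which convention each pattern uses, and the merge is repeated. Everything below (the permutation, scale factors, and noise) is a toy stand-in for real partial Bragg intensities.

    ```python
    # Expectation-maximization style resolution of a two-fold indexing ambiguity.
    import numpy as np

    rng = np.random.default_rng(0)
    n_refl, n_pat = 50, 400
    truth = rng.gamma(2.0, 1.0, n_refl)                # "true" intensities
    perm = rng.permutation(n_refl)                     # toy twin-law re-indexing

    patterns = []
    for _ in range(n_pat):
        obs = truth * rng.uniform(0.1, 1.0) + 0.05 * rng.standard_normal(n_refl)
        if rng.random() < 0.5:
            obs = obs[perm]                            # half the stills come mis-indexed
        patterns.append(obs)
    patterns = np.array(patterns)

    model = patterns.mean(axis=0)                      # ambiguous starting model
    for it in range(10):
        flipped = patterns[:, np.argsort(perm)]        # undo the re-indexing
        use_flip = np.array([np.corrcoef(f, model)[0, 1] > np.corrcoef(p, model)[0, 1]
                             for p, f in zip(patterns, flipped)])
        merged = np.where(use_flip[:, None], flipped, patterns)
        model = merged.mean(axis=0)                    # re-merge and iterate
    c = max(np.corrcoef(model, truth)[0, 1], np.corrcoef(model, truth[perm])[0, 1])
    print("correlation with truth (up to convention):", c)
    ```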

  18. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple-inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, comparing it with analogous techniques. We find excellent scalability with the number of inputs (actuators), making the method a viable option for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  19. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
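
    Random probing in its simplest form is a Hutchinson-type estimate: apply the Hessian to random sign vectors and autocorrelate. The sketch below recovers the Hessian diagonal, one common resolution proxy, using an explicit toy matrix in place of the adjoint-based Hessian-vector products of full-waveform inversion.

    ```python
    # Hutchinson-style random probing of an implicit Hessian's diagonal.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    B = rng.standard_normal((n, n)) / np.sqrt(n)
    H = B.T @ B                                   # toy SPD "Hessian"

    def hessian_vector_product(v):                # in FWI this costs ~2 wavefield solves
        return H @ v

    est = np.zeros(n)
    n_probe = 200
    for _ in range(n_probe):
        v = rng.choice([-1.0, 1.0], n)            # Rademacher probe vector
        est += v * hessian_vector_product(v)      # E[v * Hv] = diag(H)
    est /= n_probe
    print("diagonal estimate error:",
          np.linalg.norm(est - np.diag(H)) / np.linalg.norm(np.diag(H)))
    ```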

  20. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited).

    PubMed

    Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C

    2012-10-01

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  1. Advanced density profile reflectometry; the state-of-the-art and measurement prospects for ITER

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.

    2006-10-01

    Dramatic progress in millimeter-wave technology has allowed the realization of a key goal for ITER diagnostics, the routine measurement of the plasma density profile from millimeter-wave radar (reflectometry) measurements. In reflectometry, the measured round-trip group delay of a probe beam reflected from a plasma cutoff is used to infer the density distribution in the plasma. Reflectometer systems implemented by UCLA on a number of devices employ frequency-modulated continuous-wave (FM-CW), ultrawide-bandwidth, high-resolution radar systems. One such system on DIII-D has routinely demonstrated measurements of the density profile over a range of electron density of 0-6.4×10^19 m^-3, with ~25 μs temporal and ~4 mm radial resolution, meeting key ITER requirements. This progress in performance was made possible by multiple advances in the areas of millimeter-wave technology, novel measurement techniques, and improved understanding, including: (i) fast-sweep, solid-state, wide-bandwidth sources and power amplifiers, (ii) dual-polarization measurements to expand the density range, (iii) adaptive radar-based data analysis with parallel processing on a Unix cluster, (iv) high-memory-depth data acquisition, and (v) advances in full-wave code modeling. The benefits of advanced system performance will be illustrated using measurements from a wide range of phenomena, including ELM and fast-ion-driven mode dynamics, L-H transition studies, and plasma-wall interaction. The measurement capabilities demonstrated by these systems provide a design basis for the development of the main ITER profile reflectometer system. This talk will explore the extent to which these reflectometer system designs, results, and experience can be translated to ITER, and will identify what new studies and experimental tests are essential.
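
    The core relation behind profile reflectometry is that an O-mode probe wave reflects where the local plasma frequency equals the launch frequency, so each sweep frequency tags a cutoff density. A minimal sketch, with the sweep frequencies assumed:

    ```python
    # O-mode cutoff density as a function of probe frequency.
    import numpy as np
    from scipy.constants import e, m_e, epsilon_0

    def cutoff_density(f_hz):
        """O-mode cutoff density [m^-3] at probe frequency f [Hz]."""
        return (2 * np.pi * f_hz) ** 2 * epsilon_0 * m_e / e ** 2

    for f in (30e9, 60e9, 90e9):              # example FM-CW sweep frequencies
        print(f"{f/1e9:5.0f} GHz -> n_c = {cutoff_density(f):.2e} m^-3")
    ```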

  2. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z =0 , we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤0.35 h Mpc-1 . The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤0.2 h Mpc-1 , and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z =0 and by a factor of 2.5 at z =0.6 , improving standard BAO reconstruction by 70% at z =0 and 30% at z =0.6 , and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
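
    A one-dimensional toy of the loop is enough to show the moving parts: estimate the density on a grid, solve for the potential spectrally with a shrinking Gaussian smoothing, displace the particles back along the gradient, and read the linear density off the cumulative displacement. Particle numbers, amplitudes, and the smoothing schedule below are assumptions.

    ```python
    # 1D toy of iterative initial-condition reconstruction.
    import numpy as np

    ng, npart, L = 256, 4096, 1.0
    kgrid = 2 * np.pi * np.fft.fftfreq(ng, d=L / ng)
    q = np.linspace(0, L, npart, endpoint=False)       # Lagrangian positions
    x = (q + 0.01 * np.sin(6 * np.pi * q / L)) % L     # toy "evolved" positions
    psi_total = np.zeros(npart)

    for smooth in (0.1, 0.05, 0.02, 0.01):             # shrinking smoothing scale
        counts = np.histogram(x, bins=ng, range=(0, L))[0]
        delta_k = (np.fft.fft(counts / (npart / ng) - 1.0)
                   * np.exp(-0.5 * (kgrid * smooth) ** 2))
        with np.errstate(divide="ignore", invalid="ignore"):
            phi_k = np.where(kgrid != 0, -delta_k / kgrid**2, 0.0)  # solve phi'' = delta
        psi_grid = np.fft.ifft(-1j * kgrid * phi_k).real            # psi = -phi'
        psi = np.interp(x, np.linspace(0, L, ng, endpoint=False), psi_grid, period=L)
        x = (x - psi) % L                              # move particles back
        psi_total += psi

    delta_lin = -np.gradient(psi_total, L / npart)     # linear density estimate
    print("true rms delta:     ", 0.01 * 6 * np.pi / np.sqrt(2))
    print("recovered rms delta:", delta_lin.std())
    ```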

  3. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, A.; Brezinsek, S.; Mertens, Ph.

    2012-10-15

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II,more » C I, C II, C III) with high optical transmittance ({>=}30% in the designed wavelength range) as well as high spatial resolution that is {<=}2 mm at the object plane and {<=}3 mm for the full depth of field ({+-}0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the {lambda} > 0.95 {mu}m range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.« less

  4. Automated quantitative muscle biopsy analysis system

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth R. (Inventor)

    1980-01-01

    An automated system to aid the diagnosis of neuromuscular diseases by producing fiber size histograms from histochemically stained muscle biopsy tissue. Televised images of the microscopic fibers are processed electronically by a multi-microprocessor computer, which isolates, measures, and classifies the fibers and displays the fiber size distribution. The architecture of the multi-microprocessor computer, which can be iterated to any required degree of complexity, features a series of individual microprocessors P_n, each receiving data from a shared memory M_(n-1) and outputting processed data to a separate shared memory M_(n+1) under the control of a program stored in dedicated memory M_n.
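
    The measurement core of such a pipeline, isolating stained fibers and histogramming their sizes, can be sketched with standard image-processing primitives; the toy image and threshold below stand in for televised biopsy frames and the patented multi-microprocessor hardware.

    ```python
    # Segment bright "fibers" in a synthetic image and build a size histogram.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = np.zeros((200, 200))
    yy, xx = np.ogrid[:200, :200]
    for _ in range(40):                          # paint toy fibers as bright disks
        r, c, rad = rng.integers(10, 190), rng.integers(10, 190), rng.integers(3, 9)
        img[(yy - r) ** 2 + (xx - c) ** 2 <= rad ** 2] = 1.0
    img += 0.1 * rng.standard_normal(img.shape)  # stain/illumination noise

    labels, n = ndimage.label(img > 0.5)         # isolate the fibers
    areas = np.bincount(labels.ravel())[1:]      # pixel area of each labelled fiber
    hist, edges = np.histogram(areas, bins=10)
    print("fibers found:", n)
    print("size histogram:", hist)
    ```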

  5. Finite-size effects and switching times for Moran process with mutation.

    PubMed

    DeVille, Lee; Galiardi, Meghan

    2017-04-01

    We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population limit. We also study the master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries, in the form of a skew, for parameter values where the deterministic limit is symmetric.
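
    A direct simulation of the finite-N process shows the metastability the paper analyses: a bistable payoff matrix plus rare mutations produces occasional switches between the two strategy-dominated states. The payoffs, mutation rate, and switch criterion below are assumptions chosen only for illustration.

    ```python
    # Moran process with mutation for two strategies under a bistable payoff matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    N, mu, steps = 100, 0.01, 200_000
    a, b, c, d = 3.0, 1.0, 2.0, 2.5      # payoffs: A vs A/B, B vs A/B (both ESS)

    i = N // 2                           # current number of A-players
    switches, last_side = 0, None
    for _ in range(steps):
        fA = (a * (i - 1) + b * (N - i)) / (N - 1)    # mean payoff of an A-player
        fB = (c * i + d * (N - i - 1)) / (N - 1)      # mean payoff of a B-player
        pA = i * fA / (i * fA + (N - i) * fB) if 0 < i < N else float(i == N)
        born_A = rng.random() < pA * (1 - mu) + (1 - pA) * mu   # birth with mutation
        dies_A = rng.random() < i / N                            # uniform death
        i = min(max(i + int(born_A) - int(dies_A), 0), N)
        side = i > N // 2
        if last_side is not None and side != last_side:
            switches += 1                 # crude count of metastable switches
        last_side = side
    print("state switches observed:", switches)
    ```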

  6. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with the standard finite difference method (FDM) or finite element method (FEM), where the right-hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  7. CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles

    NASA Astrophysics Data System (ADS)

    Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.

    2014-04-01

    ITER first wall (FW) panels are a layered structure made of three materials: 316L(N) austenitic stainless steel, CuCrZr alloy, and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal-heat-flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, fulfilling the ITER mechanical specifications, but it often results in a coarse grain size for the CuCrZr alloy, which is unfavourable, especially for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU within the framework of an ITER task agreement. Two material fabrication approaches have been investigated. The first was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper alloys manufacturer. The second was the manufacture of CuCrZr alloy by the powder metallurgy (PM) route with HIP consolidation. This paper presents the main mechanical and microstructural results for the two CuCrZr approaches. The mechanical properties of solid CuCrZr, PM CuCrZr, and joints (solid CuCrZr/solid CuCrZr, solid CuCrZr/316L(N), and PM CuCrZr/316L(N)) are also presented.

  8. Operating Characteristics in DIII-D ELM-Suppressed RMP H-modes with ITER Similar Shapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T E; Fenstermacher, M E; Jakubowski, M

    2008-10-13

    Fast energy transients incident on the DIII-D divertors due to Type-I edge localized modes (ELMs) are eliminated using small dc currents in a simple set of non-axisymmetric coils that produce edge resonant magnetic perturbations (RMPs). In ITER-similar-shape (ISS) plasmas with electron pedestal collisionalities matched to those expected in ITER, a sharp resonant window in the safety factor at the 95 percent normalized poloidal flux surface is observed for ELM suppression at q95 = 3.57, with a minimum width Δq95 of ±0.05. The size of this resonant window has been increased by a factor of 4 in ISS plasmas by increasing the magnitude of the current in an n=3 coil set along with the current in a separate n=1 coil set. The resonant ELM-suppression window is highly reproducible for a given plasma shape, coil configuration, and coil current, but can vary with other operating conditions such as βN. Isolated resonant windows have also been found at other q95 values when using different RMP coil configurations. For example, when the I-coil is operated in an n=3 up-down asymmetric configuration rather than an up-down symmetric configuration, a resonant window is found near q95 = 7.4. A Fourier analysis of the applied vacuum magnetic field demonstrates a statistical correlation between the Chirikov island-overlap parameter and ELM suppression. These results have been used as a guide for RMP coil design studies in various ITER operating scenarios.

  9. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra high-dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named iterative nonlocal prior based selection for GWAS, or GWASinlps, that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy, which considers hierarchical screening based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as phenotype. An R package implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
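
    The screen-and-select loop at the heart of such methods is easy to sketch. The Python toy below (the actual package is an R implementation that scores models with nonlocal priors; plain least squares with BIC is used here as a stand-in, and all names and sizes are illustrative) screens predictors by marginal association with the current residual, selects within the screened set, and repeats:

        import numpy as np

        rng = np.random.default_rng(9)
        n, p = 300, 2000
        X = rng.standard_normal((n, p))                      # stand-in genotype matrix
        beta = np.zeros(p); beta[[10, 500, 1500]] = [1.0, -0.8, 0.6]
        y = X @ beta + rng.standard_normal(n)

        selected, resid = [], y.copy()
        for rounds in range(5):
            score = np.abs(X.T @ resid) / n                  # marginal screening step
            candidates = np.argsort(score)[::-1][:20]        # keep the top-20 screened SNPs
            best, best_bic = None, np.inf
            for j in candidates:                             # select within the screened set
                cols = selected + [int(j)]
                bhat, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                rss = np.sum((y - X[:, cols] @ bhat) ** 2)
                bic = n * np.log(rss / n) + len(cols) * np.log(n)
                if bic < best_bic:
                    best, best_bic = int(j), bic
            if best in selected:
                break
            selected.append(best)
            bhat, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
            resid = y - X[:, selected] @ bhat                # adjust for selected SNPs
        print("selected SNP indices:", sorted(selected))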

  10. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N0 approaches infinity (regardless of the relative sizes of N0 and Ni, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
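
    The flavor of such successive-approximation iterations can be seen in a small one-dimensional sketch: an EM-style fixed-point update for a two-component normal mixture, relaxed by a step size omega, echoing the abstract's condition that the step size lie strictly between 0 and 2. This is an illustrative analogue of the procedure, not the paper's exact algorithm, and all names are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

        w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); sig = np.array([1.0, 1.0])
        omega = 1.5                               # relaxation step size in (0, 2)
        for it in range(200):
            # E-step: posterior responsibilities of each component for each point
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
            r = dens / dens.sum(axis=1, keepdims=True)
            nk = r.sum(axis=0)
            mu_target = (r * x[:, None]).sum(axis=0) / nk
            # Relaxed successive approximation: theta <- theta + omega*(target - theta)
            mu = mu + omega * (mu_target - mu)
            sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            w = nk / len(x)
        print("estimated means:", mu)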

  11. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.

  12. Simplifications for hydronic system models in modelica

    DOE PAGES

    Jorissen, F.; Wetter, M.; Helsen, L.

    2018-01-12

    Building systems and their heating, ventilation and air conditioning flow networks are becoming increasingly complex. Some building energy simulation tools simulate these flow networks using pressure drop equations. These flow network models typically generate coupled algebraic nonlinear systems of equations, which become increasingly more difficult to solve as their sizes increase. This leads to longer computation times and can cause the solver to fail. These problems also arise when using the equation-based modelling language Modelica and Annex 60-based libraries. This may limit the applicability of the library to relatively small problems unless problems are restructured. This paper discusses two algebraic loop types and presents an approach that decouples algebraic loops into smaller parts, or removes them completely. The approach is applied to a case study model where an algebraic loop of 86 iteration variables is decoupled into smaller parts with a maximum of five iteration variables.

  13. Some modifications of Newton's method for the determination of the steady-state response of nonlinear oscillatory circuits

    NASA Astrophysics Data System (ADS)

    Grosz, F. B., Jr.; Trick, T. N.

    1982-07-01

    It is proposed that nondominant states should be eliminated from the Newton algorithm in the steady-state analysis of nonlinear oscillatory systems. This technique not only improves convergence, but also reduces the size of the sensitivity matrix so that less computation is required for each iteration. One or more periods of integration should be performed after each periodic state estimation before the sensitivity computations are made for the next periodic state estimation. These extra periods of integration between Newton iterations are found to allow the fast states due to parasitic effects to settle, which enables the Newton algorithm to make a better prediction. In addition, the reliability of the algorithm is improved in high Q oscillator circuits by both local and global damping in which the amount of damping is proportional to the difference between the initial and final state values.
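
    A concrete miniature of the scheme (with a forced Duffing oscillator standing in for the circuit, and finite differences standing in for the sensitivity computation) looks as follows; note the extra settling periods integrated between Newton updates, as the abstract recommends. Everything here is illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        T = 2 * np.pi                               # forcing period

        def rhs(t, y):
            x, v = y
            return [v, -0.2 * v - x - 0.5 * x**3 + np.cos(t)]

        def advance(y0, periods=1):
            sol = solve_ivp(rhs, (0, periods * T), y0, rtol=1e-9, atol=1e-12)
            return sol.y[:, -1]

        y = np.array([0.0, 0.0])
        for k in range(8):
            y = advance(y, periods=2)               # settling periods between Newton steps
            F = advance(y) - y                      # periodicity residual
            J = np.empty((2, 2)); eps = 1e-6        # finite-difference sensitivity matrix
            for j in range(2):
                dy = y.copy(); dy[j] += eps
                J[:, j] = (advance(dy) - dy - F) / eps
            y = y - np.linalg.solve(J, F)           # Newton update of the periodic state
        print("periodic initial state:", y, "residual norm:", np.linalg.norm(F))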

  14. Stepwise and stagewise approaches for spatial cluster detection

    PubMed Central

    Xu, Jiale

    2016-01-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power of detection. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273

  15. Stepwise and stagewise approaches for spatial cluster detection.

    PubMed

    Xu, Jiale; Gangnon, Ronald E

    2016-05-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
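
    To make the forward stepwise idea concrete, here is a toy Python version (illustrative, not the authors' implementation): candidate clusters are circular windows on a grid, and each step adds the indicator of the window that most reduces the residual sum of squares, adjusting for the clusters already in the model.

        import numpy as np

        rng = np.random.default_rng(2)
        xy = np.array([(i, j) for i in range(20) for j in range(20)], float)
        y = rng.normal(size=len(xy))
        y[np.linalg.norm(xy - [5, 5], axis=1) < 3] += 2.0   # planted cluster

        X = np.ones((len(y), 1))                            # start with intercept only
        for step in range(3):
            best_ind, best_rss = None, np.inf
            for c in xy[::7]:                               # thinned set of candidate centres
                for r in (2, 3, 4):
                    ind = (np.linalg.norm(xy - c, axis=1) < r).astype(float)
                    Xc = np.column_stack([X, ind])
                    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
                    rss = np.sum((y - Xc @ beta) ** 2)
                    if rss < best_rss:
                        best_ind, best_rss = ind, rss
            X = np.column_stack([X, best_ind])              # adjust for the found cluster
            print(f"step {step}: residual sum of squares = {best_rss:.1f}")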

  16. An Iterative O-Methyltransferase Catalyzes 1,11-Dimethylation of Aspergillus fumigatus Fumaric Acid Amides.

    PubMed

    Kalb, Daniel; Heinekamp, Thorsten; Schieferdecker, Sebastian; Nett, Markus; Brakhage, Axel A; Hoffmeister, Dirk

    2016-10-04

    S-adenosyl-l-methionine (SAM)-dependent methyltransfer is a common biosynthetic strategy to modify natural products. We investigated the previously uncharacterized Aspergillus fumigatus methyltransferase FtpM, which is encoded next to the bimodular fumaric acid amide synthetase FtpA. Structure elucidation of two new A. fumigatus natural products, the 1,11-dimethyl esters of fumaryl-l-tyrosine and fumaryl-l-phenylalanine, together with ftpM gene disruption suggested that FtpM catalyzes iterative methylation. Final evidence that a single enzyme repeatedly acts on fumaric acid amides came from an in vitro biochemical investigation with recombinantly produced FtpM. Size-exclusion chromatography indicated that this methyltransferase is active as a dimer. As ftpA and ftpM homologues are found clustered in other fungi, we expect our work will help to identify and annotate natural product biosynthesis genes in various species. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. An iterative glycosyltransferase EntS catalyzes transfer and extension of O- and S-linked monosaccharide in enterocin 96

    PubMed Central

    Nagar, Rupa; Rao, Alka

    2017-01-01

    Abstract Glycosyltransferases are essential tools for in vitro glycoengineering. Bacteria harbor an unexplored variety of protein glycosyltransferases. Here, we describe a peptide glycosyltransferase (EntS) encoded by ORF0417 of Enterococcus faecalis TX0104. EntS di-glycosylates linear peptide of enterocin 96 – a known antibacterial, in vitro. It is capable of transferring as well as extending the glycan onto the peptide in an iterative sequential dissociative manner. It can catalyze multiple linkages: Glc/Gal(-O)Ser/Thr, Glc/Gal(-S)Cys and Glc/Gal(β)Glc/Gal(-O/S)Ser/Thr/Cys, in one pot. Using EntS generated glycovariants of enterocin 96 peptide, size and identity of the glycan are found to influence bioactivity of the peptide. The study identifies EntS as an enzyme worth pursuing, for in vitro peptide glycoengineering. PMID:28498962

  18. Conceptual design and structural analysis for an 8.4-m telescope

    NASA Astrophysics Data System (ADS)

    Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego

    2004-09-01

    This paper describes the conceptual design of the optics support structures of a telescope with a primary mirror of 8.4 m, the same size as a Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure for supporting the primary and secondary mirrors and keeping them joined as rigid as possible. With this purpose an optimization with several models was done. This iterative design process includes: specifications development, concepts generation and evaluation. Process included Finite Element Analysis (FEA) as well as other analytical calculations. Quality Function Deployment (QFD) matrix was used to obtain telescope tube and spider specifications. Eight spiders and eleven tubes geometric concepts were proposed. They were compared in decision matrixes using performance indicators and parameters. Tubes and spiders went under an iterative optimization process. The best tubes and spiders concepts were assembled together. All assemblies were compared and ranked according to their performance.

  19. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.

  20. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    NASA Astrophysics Data System (ADS)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted in order to determine the water injection plan in the oilfield water injection network. The main idea of the algorithm is as follows: firstly, the oilfield water injection network is inversely calculated and the pumping station demand flow is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the injection allocation requirements. If all water injection wells meet the requirements, the calculation stops; otherwise the demanded injection allocation flow rate is reduced by a certain step size for the water injection wells that do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded into the existing water injection network system algorithms and can be realized easily. An iterative method is used, which is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
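
    The control flow the abstract describes fits in a few lines of Python. The hydraulic forward model below is a deliberately crude stand-in (a single shared capacity constraint), and all names and numbers are illustrative:

        def forward_model(demands, capacity=100.0):
            # Stand-in hydraulic solve: rates scale down when total demand
            # exceeds the pumping station capacity.
            total = sum(demands.values())
            scale = min(1.0, capacity / total)
            return {w: q * scale for w, q in demands.items()}

        demands = {"W1": 40.0, "W2": 45.0, "W3": 35.0}   # target injection allocations
        step = 1.0                                       # reduction step size
        for iteration in range(100):
            achieved = forward_model(demands)            # forward modeling check
            unmet = [w for w in demands if achieved[w] < 0.99 * demands[w]]
            if not unmet:                                # all wells meet allocation: stop
                break
            for w in unmet:                              # back off unmet wells by one step
                demands[w] -= step
        print("iterations:", iteration, "achieved:", achieved)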

  1. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
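
    The core idea, replacing the tuned thresholding stage with a projection onto the known support, can be sketched in a few lines. The loop below is plain iterative thresholding (a simpler cousin of AMP, which adds an Onsager correction term) with the shrinkage step swapped for the location constraint; sizes and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 60, 200
        A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in sensing matrix
        support = rng.choice(n, 10, replace=False)       # known nonzero locations
        x_true = np.zeros(n); x_true[support] = rng.standard_normal(10)
        y = A @ x_true

        mask = np.zeros(n); mask[support] = 1.0
        x = np.zeros(n)
        for it in range(300):
            x = x + 0.5 * A.T @ (y - A @ x)              # conservative gradient step
            x *= mask                                    # location constraint, no threshold
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))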

  2. Enhancement of runaway production by resonant magnetic perturbation on J-TEXT

    NASA Astrophysics Data System (ADS)

    Chen, Z. Y.; Huang, D. W.; Izzo, V. A.; Tong, R. H.; Jiang, Z. H.; Hu, Q. M.; Wei, Y. N.; Yan, W.; Rao, B.; Wang, S. Y.; Ma, T. K.; Li, S. C.; Yang, Z. J.; Ding, D. H.; Wang, Z. J.; Zhang, M.; Zhuang, G.; Pan, Y.; J-TEXT Team

    2016-07-01

    The suppression of runaways following disruptions is key for the safe operation of ITER. Massive gas injection (MGI) has been developed to mitigate heat loads, electromagnetic forces and runaway electrons (REs) during disruptions. However, MGI may not completely prevent the generation of REs during disruptions on ITER. Resonant magnetic perturbation (RMP) has been applied to suppress runaway generation during disruptions on several machines. It was found that strong RMP results in the enhancement of runaway production instead of runaway suppression on J-TEXT. The runaway current was about 50% of the pre-disruption plasma current in argon-induced reference disruptions. With moderate RMP, the runaway current decreased to below 30% of the pre-disruption plasma current. The runaway current plateau reached 80% of the pre-disruptive current when strong RMP was applied. Strong RMP may induce large magnetic islands that could confine more runaway seed electrons during disruptions. This has important implications for runaway suppression on large machines.

  3. High density operation for reactor-relevant power exhaust

    NASA Astrophysics Data System (ADS)

    Wischmeier, M.; ASDEX Upgrade Team; Jet Efda Contributors

    2015-08-01

    With increasing size of a tokamak device and the associated fusion power gain, an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for a safe and economic operation. Using purely geometric arguments in an ITER-like divertor, this power flux can be reduced by approximately a factor of 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to remove power deposited on plasma facing components, a further reduction of the power flux density via volumetric processes in the plasma by up to a factor of 50 is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed, and an operational scenario is presented as a potential solution for ITER-like divertors under high density and highly radiating reactor-relevant conditions. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.

  4. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr−1, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.

  5. Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems

    NASA Astrophysics Data System (ADS)

    Leuschner, Matthias; Fritzen, Felix

    2017-11-01

    Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
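
    For orientation, a bare-bones Python version of the classical fixed-point Fourier scheme (the family of methods FANS is benchmarked against, not FANS itself) for a 2D thermal cell problem is shown below; the periodic Green operator is applied by FFT and the reference conductivity is the usual average of the extreme values. All parameters are illustrative.

        import numpy as np

        N = 64
        k = np.ones((N, N)); k[16:48, 16:48] = 10.0     # square inclusion, contrast 10
        k0 = 0.5 * (k.min() + k.max())                  # reference conductivity
        E = np.array([1.0, 0.0])                        # prescribed mean temperature gradient

        xi = np.fft.fftfreq(N) * 2 * np.pi
        XI = np.stack(np.meshgrid(xi, xi, indexing="ij"))   # wave vectors, shape (2, N, N)
        xi2 = (XI ** 2).sum(0); xi2[0, 0] = 1.0             # avoid division by zero at DC

        e = np.broadcast_to(E[:, None, None], (2, N, N)).copy()
        for it in range(200):
            tau = (k - k0) * e                          # polarization field
            tau_h = np.fft.fft2(tau)
            proj = (XI * tau_h).sum(0) / (k0 * xi2)     # Green operator applied in Fourier space
            e_h = -XI * proj
            e_h[:, 0, 0] = E * N * N                    # enforce the mean gradient
            e_new = np.fft.ifft2(e_h).real
            if np.max(np.abs(e_new - e)) < 1e-8:
                e = e_new
                break
            e = e_new
        print("iterations:", it, " effective k_xx:", (k * e)[0].mean())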

  6. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  7. Considerations For Distributed Short Takeoff Vertical Landing (STOVL) Operations

    DTIC Science & Technology

    2015-05-18

    intended to confuse the enemy through non-linear operations by creating a network of highly-capable battalion to squad sized units spread across the...the next iteration was named Enhanced Company Operations (ECO). Here, the infantry Company became the focus of the distributed concept... friendly battlefield against the MAGTF commander’s idea. As events progress over time, the enemy can eventually target the remaining defended M-FARPs and

  8. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step...“Coupling of Substructures for Dynamic Analyses,” AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319. “Using the State-Dependent Modal Force (MFORCE),” AFL...an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration; and

  9. Design and Development of a User Interface for the Dynamic Model of Software Project Management.

    DTIC Science & Technology

    1988-03-01

    directory of the user’s choice for future...the last choice selected. Let us assume for the sake of this tour that the user has selected all eight choices. ESTIMATED ACTUAL PROJECT SIZE DEFINITION...manipulation of variables in the Dynamica model...The user interface for the Dynamica model was designed by an iterative process of prototyping

  10. Computer-Aided Design Of Turbine Blades And Vanes

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne Q.

    1988-01-01

    Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.

  11. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.

    2009-04-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experience with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was utilized for all optimizations. The effect of various optimization routines (iterative, geometric, equal times) was studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both non-apex and apex models of vaginal cylinders, doses for the apex point and three dome points were higher for the apex model compared with the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex point and dome points. The mean dose for the optimization points was 6.01 Gy for iterative optimization and was much higher than the 5.74 Gy for the geometric and equal times routines. A step size of 1 cm gave the highest dose to the apex point. This step size was superior in terms of mean dose to the optimization points. Selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the dose distributions.
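
    As a toy illustration of this kind of dose-point optimization (not the planning system's actual algorithm), the Python sketch below solves for non-negative dwell weights reproducing a 6 Gy prescription at points on a cylinder surface, using a bare inverse-square kernel with no anisotropy or scatter; the geometry and all names are invented for the example.

        import numpy as np
        from scipy.optimize import nnls

        dwell_z = np.arange(13) * 0.5                  # 13 dwell positions, 0.5 cm apart
        pts = np.array([(1.5, z) for z in dwell_z])    # optimization points on a 3 cm cylinder

        # Dose matrix: D[i, j] = inverse-square contribution of dwell j at point i
        r2 = pts[:, 0, None] ** 2 + (pts[:, 1, None] - dwell_z[None, :]) ** 2
        D = 1.0 / r2

        weights, rnorm = nnls(D, np.full(len(pts), 6.0))   # non-negative dwell weights
        print("dwell weights:", np.round(weights, 2), " residual:", round(rnorm, 4))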

  12. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.

  13. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
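
    The same design pattern translates directly to other languages. As a rough Python analogue of Iterator::DateTime (the originals are Perl modules; the names here are invented), a generator can encapsulate a start, an end, a recurrence step, and a format description, handing back values only on demand:

        from datetime import datetime, timedelta

        def datetime_iterator(start, end, step=timedelta(days=1), fmt="%Y-%m-%d"):
            # Yields formatted timestamps lazily, like an iterator object would.
            current = start
            while current <= end:
                yield current.strftime(fmt)
                current += step

        for stamp in datetime_iterator(datetime(2009, 1, 1), datetime(2009, 1, 5)):
            print(stamp)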

  14. Modeling of the ITER-like wide-angle infrared thermography view of JET.

    PubMed

    Aumeunier, M-H; Firdaouss, M; Travère, J-M; Loarer, T; Gauthier, E; Martin, V; Chabaud, D; Humbert, E

    2012-10-01

    Infrared (IR) thermography systems are mandatory to ensure safe plasma operation in fusion devices. However, IR measurements are made much more complicated in a metallic environment because of the spurious contributions of reflected fluxes. This paper presents a fully predictive photonic simulation able to accurately assess the surface temperature measurement with classical IR thermography from a given plasma scenario, taking into account the optical properties of the PFC materials. The simulation has been carried out for the ITER-like wide-angle infrared camera view of JET and compared with experimental data. The consequences and effects of the low emissivity and of the bidirectional reflectivity distribution function used in the model for the metallic PFCs on the contribution of the reflected flux in the analysis are discussed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaupa, M.; Sartori, E.

    The Megavolt ITER Injector and Concept Advancement (MITICA) is the full-scale prototype of the heating and current drive neutral beam injectors for ITER, to be built at Consorzio RFX (Padova). The engineering design of its components is challenging: the total heat loads they will be subjected to (expected between 2 and 19 MW), the high heat fluxes (up to 20 MW/m^2), and the beam pulse duration of up to 1 h set demanding requirements for reliable active cooling circuits. In support of the design, the thermo-hydraulic behavior of each cooling circuit under steady state conditions has been investigated by using one-dimensional models. The final results, obtained considering a number of optimizations for the cooling circuits, show that all the requirements in terms of flow rate, temperature, and pressure drop are properly fulfilled.

  16. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    NASA Astrophysics Data System (ADS)

    Seif, Dariush; Ghoniem, Nasr M.

    2014-12-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô's calculus, rate equations for the first five moments of the size distribution in helium-vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium-vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the implementation of a path-integral approach may proceed if the distribution is known experimentally to significantly stray from a Gaussian description.
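
    The moment-equation machinery can be illustrated on a toy scalar Itô SDE (not the helium-vacancy model itself): for dX = (c - kX) dt + σ dW, the mean and variance obey closed ODEs, dm/dt = c - km and dv/dt = -2kv + σ², and an Euler-Maruyama ensemble reproduces them. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        c, k, sigma = 1.0, 0.5, 0.3
        dt, nsteps, npaths = 0.01, 1000, 20000

        X = np.zeros(npaths)                      # Euler-Maruyama ensemble
        m, v = 0.0, 0.0                           # moment ODEs, integrated by forward Euler
        for step in range(nsteps):
            X += (c - k * X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(npaths)
            m += (c - k * m) * dt
            v += (-2 * k * v + sigma ** 2) * dt
        print("ensemble mean/var:", X.mean(), X.var(), " moment ODEs:", m, v)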

  17. 16 CFR 1509.7 - Hardware.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... NON-FULL-SIZE BABY CRIBS § 1509.7 Hardware. (a) The hardware in a non-full-size baby crib shall be... abuse. (b) Non-full-size baby cribs shall incorporate locking or latching devices for dropsides or... non-full-size baby crib. ...

  18. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately the amount of damping is not known a priori and can significantly extend the number of calls of the computationally expensive ray-tracer and the least squares matrix solver. If the damping term is too small the solution step-size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
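
    The damping logic described above fits in a few lines. The sketch below fits a toy exponential-decay model (illustrative, not the tomographic solver): the damping factor is raised after a rejected step, pushing the update toward steepest descent, and lowered after an accepted one, pushing it toward Gauss-Newton.

        import numpy as np

        def residuals(p, t, y):
            return y - p[0] * np.exp(-p[1] * t)       # toy model: y = a * exp(-b t)

        def jacobian(p, t):
            return np.column_stack([-np.exp(-p[1] * t), p[0] * t * np.exp(-p[1] * t)])

        t = np.linspace(0, 4, 50)
        y = 2.5 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(4).standard_normal(50)

        p, lam = np.array([1.0, 0.5]), 1e-2
        for it in range(50):
            r = residuals(p, t, y)
            J = jacobian(p, t)
            step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
            if np.sum(residuals(p + step, t, y) ** 2) < np.sum(r ** 2):
                p, lam = p + step, lam * 0.3          # accept: less damping (Gauss-Newton-like)
            else:
                lam *= 10.0                           # reject: more damping (steepest-descent-like)
        print("fitted parameters:", p)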

  19. Board Saver for Use with Developmental FPGAs

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew

    2009-01-01

    A device denoted a board saver has been developed as a means of reducing wear and tear of a printed-circuit board onto which an antifuse field programmable gate array (FPGA) is to be eventually soldered permanently after a number of design iterations. The need for the board saver or a similar device arises because (1) antifuse-FPGA design iterations are common and (2) repeated soldering and unsoldering of FPGAs on the printed-circuit board to accommodate design iterations can wear out the printed-circuit board. The board saver is basically a solderable/unsolderable FPGA receptacle that is installed temporarily on the printed-circuit board. The board saver is, more specifically, a smaller, square-ring-shaped, printed-circuit board (see figure) that contains half via holes, one for each contact pad, along its periphery. As initially fabricated, the board saver is a wider ring containing full via holes, but then it is milled along its outer edges, cutting the via holes in half and laterally exposing their interiors. The board saver is positioned in registration with the designated FPGA footprint and each via hole is soldered to the outer portion of the corresponding FPGA contact pad on the first-mentioned printed-circuit board. The via-hole/contact joints can be inspected visually and can be easily unsoldered later. The square hole in the middle of the board saver is sized to accommodate the FPGA, and the thickness of the board saver is the same as that of the FPGA. Hence, when a non-final FPGA is placed in the square hole, the combination of the non-final FPGA and the board saver occupies no more area and thickness than would a final FPGA soldered directly into its designated position on the first-mentioned circuit board. The contact leads of a non-final FPGA are not bent and are soldered, at the top of the board saver, to the corresponding via holes. A non-final FPGA can readily be unsoldered from the board saver and replaced by another one. Once the final FPGA design has been determined, the board saver can be unsoldered from the contact pads on the first-mentioned printed-circuit board and replaced by the final FPGA.

  20. Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course

    ERIC Educational Resources Information Center

    Kruse, Gerald; Drews, David

    2013-01-01

    A full-cycle assessment of our efforts to improve quantitative reasoning in an introductory math course is described. Our initial iteration substituted more open-ended performance tasks for the active learning projects than had been used. Using a quasi-experimental design, we compared multiple sections of the same course and found non-significant…

  1. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    PubMed

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-09-08

    The purpose of this study was to evaluate the performance of the third generation of the model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user a choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.

  2. Numerical Study of High Heat Flux Performances of Flat-Tile Divertor Mock-ups with Hypervapotron Cooling Concept

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong

    2015-09-01

    The hypervapotron (HV), as an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as for the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Beforehand, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width among the tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. The optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004)

  3. Progress of the ELISE test facility: towards one hour pulses in hydrogen

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Fantz, U.; Heinemann, B.; Kraus, W.; Riedl, R.; Wimmer, C.; the NNBI Team

    2016-10-01

    In order to fulfil the ITER requirements, the negative hydrogen ion source used for NBI has to deliver a high source performance, i.e. a high extracted negative ion current and simultaneously a low co-extracted electron current over a pulse length up to 1 h. Negative ions will be generated by the surface process in a low-temperature low-pressure hydrogen or deuterium plasma. Therefore, a certain amount of caesium has to be deposited on the plasma grid in order to obtain a low surface work function and consequently a high negative ion production yield. This caesium is re-distributed by the influence of the plasma, resulting in temporal instabilities of the extracted negative ion current and the co-extracted electrons over long pulses. This paper describes experiments performed in hydrogen operation at the half-ITER-size NNBI test facility ELISE in order to develop a caesium conditioning technique for more stable long pulses at an ITER relevant filling pressure of 0.3 Pa. A significant improvement of the long pulse stability is achieved. Together with different plasma diagnostics it is demonstrated that this improvement is correlated to the interplay of very small variations of parameters like the electrostatic potential and the particle densities close to the extraction system.

  4. Application of reflectometry power flow for magnetic field pitch angle measurements in tokamak plasmas (invited).

    PubMed

    Gourdain, P-A; Peebles, W A

    2008-10-01

    Reflectometry has successfully demonstrated measurements of many important parameters in high temperature tokamak fusion plasmas. However, implementing such capabilities in a high-field, large plasma, such as ITER, will be a significant challenge. In ITER, the ratio of plasma size (meters) to the required reflectometry source wavelength (millimeters) is significantly larger than in existing fusion experiments. This suggests that the flow of the launched reflectometer millimeter-wave power can be realistically analyzed using three-dimensional ray tracing techniques. The analytical and numerical studies presented will highlight the fact that the group velocity (or power flow) of the launched microwaves is dependent on the direction of wave propagation relative to the internal magnetic field. It is shown that this dependence strongly modifies power flow near the cutoff layer in a manner that embeds the local magnetic field direction in the "footprint" of the power returned toward the launch antenna. It will be shown that this can potentially be utilized to locally determine the magnetic field pitch angle at the cutoff location. The resultant beam drift and distortion due to magnetic field and relativistic effects also have significant consequences on the design of reflectometry systems for large, high-field fusion experiments. These effects are discussed in the context of the upcoming ITER burning plasma experiment.

  5. Detection of mouse liver cancer via a parallel iterative shrinkage method in hybrid optical/microcomputed tomography imaging

    NASA Astrophysics Data System (ADS)

    Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie

    2012-12-01

    Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we associate several mathematical strategies with the PIS method, including a specific smooth convex approximation, an iterative shrinkage operator, and an affine subspace, which guarantee the accuracy, efficiency, and reliability of the three-dimensional reconstruction. An in vivo experiment on a bead-implanted mouse was then performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude faster than that of existing algorithms; and the approach is robust to the choice of regularization parameters and lp norms. Finally, we applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests the PIS method holds promise for practical applications of whole-body cancer detection.
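
    A minimal sketch of the iterative-shrinkage core that such methods build on is shown below (the actual PIS algorithm adds a smooth convex approximation and an affine-subspace strategy and couples optical and CT operators; here a random Gaussian matrix and plain iterative soft-thresholding stand in, with all sizes illustrative):

        import numpy as np

        rng = np.random.default_rng(5)
        m, n = 80, 256
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in measurement operator
        x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = 1.0
        y = A @ x_true

        x, step, lam = np.zeros(n), 0.2, 0.02
        for it in range(500):
            g = x + step * A.T @ (y - A @ x)            # gradient step on the data term
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # shrinkage operator
        print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))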

  6. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Up to today, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
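
    The column-wise construction is easy to sketch: each column m_j of the approximate inverse solves an independent least-squares problem min ||A m_j - e_j|| over a fixed sparsity pattern, so the n solves parallelize trivially and applying the preconditioner is a single matrix-vector product. Below is a serial Python toy with a tridiagonal pattern (the abstract's setting is unsymmetric and sparse at much larger scale; everything here is illustrative):

        import numpy as np

        n = 50
        A = np.eye(n) * 4 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

        M = np.zeros((n, n))
        for j in range(n):                                   # independent per-column solves
            pattern = [k for k in (j - 1, j, j + 1) if 0 <= k < n]
            e = np.zeros(n); e[j] = 1.0
            mj, *_ = np.linalg.lstsq(A[:, pattern], e, rcond=None)
            M[pattern, j] = mj                               # scatter into the sparse pattern
        print("cond(A) =", np.linalg.cond(A), " cond(M A) =", np.linalg.cond(M @ A))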

  7. Time-to-burnout data for a prototypical ITER divertor tube during a simulated loss of flow accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, T.D.; Watson, R.D.; McDonald, J.M.

    The Loss of Flow Accident (LOFA) is a serious safety concern for the International Thermonuclear Experimental Reactor (ITER), as it has been suggested that more than 100 seconds are necessary to safely shut down the plasma when ITER is operating at full power. In this experiment, the thermal response of a prototypical ITER divertor tube during a simulated LOFA was studied. The divertor tube was fabricated from oxygen-free high-conductivity copper to have a square geometry with a circular coolant channel. The coolant channel inner diameter was 0.77 cm, the heated length was 4.0 cm, and the heated width was 1.6 cm. The mockup did not feature any flow enhancement techniques, i.e., swirl tape, helical coils, or internal fins. One-sided surface heating of the mockup was accomplished through the use of the 30 kW Sandia Electron Beam Test System. After reaching steady state temperatures in the mockup, as determined by two Type-K thermocouples installed 0.5 mm beneath the heated surface, the coolant pump was manually tripped off and the coolant flow allowed to naturally coast down. Electron beam heating continued after the pump trip until the divertor tube's heated surface exhibited the high temperature transient normally indicative of rapidly approaching burnout. Experimental data showed that the time-to-burnout increases proportionally with increasing inlet velocity and decreases proportionally with increasing incident heat flux.

  8. Conceptual design of the radial gamma ray spectrometers system for α particle and runaway electron measurements at ITER

    NASA Astrophysics Data System (ADS)

    Nocente, M.; Tardocchi, M.; Barnsley, R.; Bertalot, L.; Brichard, B.; Croci, G.; Brolatti, G.; Di Pace, L.; Fernandes, A.; Giacomelli, L.; Lengar, I.; Moszynski, M.; Krasilnikov, V.; Muraro, A.; Pereira, R. C.; Perelli Cippo, E.; Rigamonti, D.; Rebai, M.; Rzadkiewicz, J.; Salewski, M.; Santosh, P.; Sousa, J.; Zychor, I.; Gorini, G.

    2017-07-01

    We here present the principles and main physics capabilities behind the design of the radial gamma ray spectrometers (RGRS) system for alpha particle and runaway electron measurements at ITER. The diagnostic benefits from recent advances in gamma-ray spectrometry for tokamak plasmas and combines space and high energy resolution in a single device. The RGRS system as designed can provide information on α particles on a time scale of 1/10 of the slowing down time for the ITER 500 MW full power DT scenario. Spectral observations of the 3.21 and 4.44 MeV peaks from the 9Be(α,nγ)12C reaction make the measurements sensitive to α particles at characteristic resonant energies and to possible anisotropies of their slowing down distribution function. An independent assessment of the neutron rate by gamma-ray emission is also feasible. In the case of runaway electrons born in disruptions with a typical duration of 100 ms, a time resolution of at least 10 ms for runaway electron studies can be achieved depending on the scenario, and down to a current of 40 kA by use of external gas injection. We find that the bremsstrahlung spectrum in the MeV range from confined runaways is sensitive to the electron velocity space up to E ≈ 30-40 MeV, which allows for measurements of the energy distribution of the runaway electrons at ITER.

  9. A phenology of the evolution of endothermy in birds and mammals.

    PubMed

    Lovegrove, Barry G

    2017-05-01

    Recent palaeontological data and novel physiological hypotheses now allow a timescaled reconstruction of the evolution of endothermy in birds and mammals. A three-phase iterative model describing how endothermy evolved from Permian ectothermic ancestors is presented. In Phase One I propose that the elevation of endothermy - increased metabolism and body temperature (Tb) - complemented large-body-size homeothermy during the Permian and Triassic in response to the fitness benefits of enhanced embryo development (parental care) and the activity demands of conquering dry land. I propose that Phase Two commenced in the Late Triassic and Jurassic and was marked by extreme body-size miniaturization, the evolution of enhanced body insulation (fur and feathers), increased brain size, thermoregulatory control, and increased ecomorphological diversity. I suggest that Phase Three occurred during the Cretaceous and Cenozoic and involved endothermic pulses associated with the evolution of muscle-powered flapping flight in birds, terrestrial cursoriality in mammals, and climate adaptation in response to Late Cenozoic cooling in both birds and mammals. Although the triphasic model argues for an iterative evolution of endothermy in pulses throughout the Mesozoic and Cenozoic, it is also argued that endothermy was potentially abandoned at any time that a bird or mammal did not rely upon its thermal benefits for parental care or breeding success. The abandonment would have taken the form of either hibernation or daily torpor as observed in extant endotherms. Thus torpor and hibernation are argued to be as ancient as the origins of endothermy itself, a plesiomorphic characteristic observed today in many small birds and mammals. © 2016 Cambridge Philosophical Society.

  10. ACT Payload Shroud Structural Concept Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart B.; Bednarcyk, Brett A.

    2010-01-01

    Aerospace structural applications demand a weight-efficient design to perform in a cost effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat stiffened/corrugated panel and a fiber reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs versus multiple strength and stability-based failure criteria across multiple load cases. HyperSizer's sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. Iteration of analysis/optimization is performed to ensure a converged design. Sizing results for the hat stiffened panel concept and the fiber reinforced foam sandwich concept are presented.

  11. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with the CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms conventional filtered back projection (FBP) and the TV-penalized simultaneous algebraic reconstruction technique (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line-pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details in bony areas and of brain soft tissue. For example, the results show that the ventricles and some brain folds become visible in SDIR-reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS were very close, equal to 0.7 mm⁻¹ at half maximum, which was increased to 1.2 mm⁻¹ by SDIR.
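
    The central idea, folding a measured blur kernel into the forward model of an iterative solver, can be sketched compactly. The 1-D Landweber toy below illustrates only that idea, not the authors' SDIR algorithm (which uses a CBCT projector with TV and nonlocal penalties solved by split Bregman); the Gaussian kernel, sizes, and step size are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 128
      x_true = np.zeros(n); x_true[40:60] = 1.0; x_true[80:85] = 0.5

      kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
      kernel /= kernel.sum()              # measurable blur model (e.g. from an MTF)

      def A(x):                           # forward model: system blur applied to x
          return np.convolve(x, kernel, mode="same")

      b = A(x_true) + 0.01 * rng.standard_normal(n)

      x = np.zeros(n)
      for _ in range(200):                # Landweber: x += step * A^T (b - A x);
          x += A(b - A(x))                # the kernel is symmetric, so A^T = A,
          x = np.maximum(x, 0)            # and ||A|| <= 1 makes step = 1 safe;
      print(float(np.abs(x - x_true).max()))  # nonnegativity stands in for TV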

  12. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with the CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms conventional filtered back projection (FBP) and the TV-penalized simultaneous algebraic reconstruction technique (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line-pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details in bony areas and of brain soft tissue. For example, the results show that the ventricles and some brain folds become visible in SDIR-reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS were very close, equal to 0.7 mm⁻¹ at half maximum, which was increased to 1.2 mm⁻¹ by SDIR.

  13. Radiofrequency pulse design using nonlinear gradient magnetic fields.

    PubMed

    Kopanoglu, Emre; Constable, R Todd

    2015-09-01

    An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
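
    Generic matching pursuit, sketched to show how encoding functions (dictionary columns) can be greedily selected against a target profile; this is an illustration of the named ingredient, not the authors' pulse-design code, and the random dictionary is a stand-in for actual SEFs.

      import numpy as np

      def matching_pursuit(D, target, n_atoms):
          """Greedily pick columns of D that best match the current residual."""
          residual = target.astype(float).copy()
          chosen, coeffs = [], []
          for _ in range(n_atoms):
              corr = D.T @ residual                 # correlation with every atom
              k = int(np.argmax(np.abs(corr)))      # best-matching atom
              a = corr[k] / (D[:, k] @ D[:, k])     # its least-squares coefficient
              residual -= a * D[:, k]               # peel it off the residual
              chosen.append(k); coeffs.append(a)
          return chosen, coeffs, residual

      rng = np.random.default_rng(1)
      D = rng.standard_normal((64, 256))            # stand-in dictionary of SEFs
      target = 2.0 * D[:, 3] + 0.5 * D[:, 100]      # profile built from two atoms
      idx, amp, res = matching_pursuit(D, target, 5)
      print(idx, float(np.linalg.norm(res)))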

  14. Multistep-Ahead Air Passengers Traffic Prediction with Hybrid ARIMA-SVMs Models

    PubMed Central

    Ming, Wei; Xiong, Tao

    2014-01-01

    The hybrid ARIMA-SVMs prediction models have been established recently, taking advantage of the unique strengths of ARIMA and SVMs models in linear and nonlinear modeling, respectively. Building upon such hybrid ARIMA-SVMs models, this study extends them to the case of multistep-ahead prediction of air passenger traffic using the two most commonly used multistep-ahead prediction strategies, that is, the iterated strategy and the direct strategy. Additionally, the effectiveness of data preprocessing approaches, such as deseasonalization and detrending, is investigated and verified for both strategies. Real data sets comprising four selected airlines' monthly series were collected to assess the effectiveness of the proposed approach. Empirical results demonstrate that the direct strategy performs better in long-term prediction, while the iterated strategy performs better in short-term prediction. Furthermore, both deseasonalization and detrending can significantly improve the prediction accuracy for both strategies, indicating the necessity of data preprocessing. As such, this study serves as a full reference for planners in the air transportation industry on how to tackle multistep-ahead prediction tasks with either prediction strategy. PMID:24723814
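
    The two strategies compared above differ only in how models and horizons pair up. The sketch below contrasts them with a plain AR(p) model fit by least squares; the paper's ARIMA-SVM hybrids are replaced by this deliberately simple learner, so only the strategies themselves are illustrated.

      import numpy as np

      def iterated_forecast(y, p, h):
          """One one-step model; feed its own predictions back in h times."""
          X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
          w, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
          window, preds = list(y[-p:]), []
          for _ in range(h):
              preds.append(float(np.dot(w, window)))
              window = window[1:] + [preds[-1]]
          return preds

      def direct_forecast(y, p, h):
          """One dedicated model per horizon; no feedback of predictions."""
          preds = []
          for step in range(1, h + 1):
              X = np.column_stack([y[i:len(y) - p - step + 1 + i] for i in range(p)])
              w, *_ = np.linalg.lstsq(X, y[p + step - 1:], rcond=None)
              preds.append(float(np.dot(w, y[-p:])))
          return preds

      y = np.sin(0.3 * np.arange(200)) + 0.1 * np.random.default_rng(2).standard_normal(200)
      print(iterated_forecast(y, 12, 6))
      print(direct_forecast(y, 12, 6))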

  15. Development of a real-time system for ITER first wall heat load control

    NASA Astrophysics Data System (ADS)

    Anand, Himank; de Vries, Peter; Gribov, Yuri; Pitts, Richard; Snipes, Joseph; Zabeo, Luca

    2017-10-01

    The steady-state heat flux on the ITER first wall (FW) panels is limited by the heat removal capacity of the water cooling system. In the case of off-normal events (e.g. plasma displacement during H-L transitions), the heat loads are predicted to exceed the design limits (2-4.7 MW/m²). Intense heat loads are predicted on the FW even well before the burning plasma phase. Thus, a real-time (RT) FW heat load control system is mandatory from early plasma operation of the ITER tokamak. A heat load estimator based on the RT equilibrium reconstruction has been developed for the plasma control system (PCS). A scheme is presented that estimates the energy state for prescribed gaps, defined as the distance between the last closed flux surface (LCFS)/separatrix and the FW. The RT energy state is determined by the product of a weighted function of gap distance and the power crossing the plasma boundary. In addition, a heat load estimator assuming a simplified FW geometry and a parallel heat transport model in the scrape-off layer (SOL), benchmarked against a full 3-D magnetic field line tracer, is also presented.
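
    A back-of-the-envelope version of the gap-based estimate described above, assuming the wall heat flux falls off exponentially with the gap between the separatrix and the panel. The SOL width, power, and gap values are illustrative assumptions, not ITER design figures, and the real PCS estimator is more elaborate.

      import numpy as np

      lambda_q = 0.05      # assumed SOL heat-flux e-folding width (m)
      P_sol = 20e6         # assumed power crossing the plasma boundary (W)
      gaps = np.array([0.05, 0.10, 0.15, 0.22])   # controlled wall gaps (m)

      weights = np.exp(-gaps / lambda_q)   # weighted function of gap distance
      energy_state = weights * P_sol       # per-gap energy-state proxy (W)
      print(energy_state / 1e6)            # MW-scale loading near each gap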

  16. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot, significantly reducing the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
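
    The restart-and-re-encode pattern is easy to show on a toy problem. Below, a quadratic stand-in for the FWI misfit is minimized with scipy's L-BFGS-B, restarted every segment with a freshly drawn random encoding (the paper additionally mixes invariant and re-drawn codes within a segment); the per-shot operators and all sizes are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      n_shots, n_model = 32, 50
      G = [rng.standard_normal((40, n_model)) for _ in range(n_shots)]
      m_true = rng.standard_normal(n_model)
      d = [Gi @ m_true for Gi in G]                 # per-shot synthetic data

      def encoded_misfit(m, codes):
          """Misfit and gradient for one random super-shot: sum_i c_i * shot_i."""
          Gs = sum(c * Gi for c, Gi in zip(codes, G))
          ds = sum(c * di for c, di in zip(codes, d))
          r = Gs @ m - ds
          return 0.5 * float(r @ r), Gs.T @ r

      m = np.zeros(n_model)
      for segment in range(10):                     # restart L-BFGS every segment
          codes = rng.choice([-1.0, 1.0], n_shots)  # fresh random encoding
          res = minimize(encoded_misfit, m, args=(codes,), jac=True,
                         method="L-BFGS-B", options={"maxiter": 8})
          m = res.x                                 # warm-start the next segment
      print(float(np.linalg.norm(m - m_true)))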

  17. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V.; Hartog, D. J. Den; Duff, J.

    2014-11-15

    At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/(m_e c^2), may be insufficient; we present a more precise model with τ^2-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.

  18. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbisan, M., E-mail: marco.barbisan@igi.cnr.it; Zaniol, B.; Pasqualotto, R.

    2014-11-15

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H−/D− ion RF source, and MITICA, a prototype of the full-performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor the operation and allow optimization of the performance of the two prototypes. In particular, beam emission spectroscopy will measure the uniformity and the divergence of the fast-particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model at the base of the simulations and presents the modeled Hα spectra in the case of the MITICA experiment.

  19. Software Testing for Evolutionary Iterative Rapid Prototyping

    DTIC Science & Technology

    1990-12-01

    ...possible meanings. A basic dictionary definition describes prototyping as "an original type, form, or instance that serves as a model on which later..." Asset instruments the subject procedure and produces a graph of the structure for the type of data flow testing conducted.

  20. Dual circuit embossed sheet heat transfer panel

    DOEpatents

    Morgan, G.D.

    1984-02-21

    A heat transfer panel provides redundant cooling for fusion reactors or the like environment requiring low-mass construction. Redundant cooling is provided by two independent cooling circuits, each circuit consisting of a series of channels joined to inlet and outlet headers. The panel comprises a welded joinder of two full-size and two much smaller partial-size sheets. The first full-size sheet is embossed to form first portions of channels for the first and second circuits, as well as a header for the first circuit. The second full-sized sheet is then laid over and welded to the first full-size sheet. The first and second partial-size sheets are then overlaid on separate portions of the second full-sized sheet, and are welded thereto. The first and second partial-sized sheets are embossed to form inlet and outlet headers, which communicate with channels of the second circuit through apertures formed in the second full-sized sheet. 6 figs.

  1. Dual-circuit embossed-sheet heat-transfer panel

    DOEpatents

    Morgan, G.D.

    1982-08-23

    A heat transfer panel provides redundant cooling for fusion reactors or the like environment requiring low-mass construction. Redundant cooling is provided by two independent cooling circuits, each circuit consisting of a series of channels joined to inlet and outlet headers. The panel comprises a welded joinder of two full-size and two much smaller partial-size sheets. The first full-size sheet is embossed to form first portions of channels for the first and second circuits, as well as a header for the first circuit. The second full-sized sheet is then laid over and welded to the first full-size sheet. The first and second partial-size sheets are then overlaid on separate portions of the second full-sized sheet, and are welded thereto. The first and second partial-sized sheets are embossed to form inlet and outlet headers, which communicate with channels of the second circuit through apertures formed in the second full-sized sheet.

  2. Dual circuit embossed sheet heat transfer panel

    DOEpatents

    Morgan, Grover D.

    1984-01-01

    A heat transfer panel provides redundant cooling for fusion reactors or the like environment requiring low-mass construction. Redundant cooling is provided by two independent cooling circuits, each circuit consisting of a series of channels joined to inlet and outlet headers. The panel comprises a welded joinder of two full-size and two much smaller partial-size sheets. The first full-size sheet is embossed to form first portions of channels for the first and second circuits, as well as a header for the first circuit. The second full-sized sheet is then laid over and welded to the first full-size sheet. The first and second partial-size sheets are then overlaid on separate portions of the second full-sized sheet, and are welded thereto. The first and second partial-sized sheets are embossed to form inlet and outlet headers, which communicate with channels of the second circuit through apertures formed in the second full-sized sheet.

  3. Calibration of ITER Instant Power Neutron Monitors: Recommended Scenario of Experiments at the Reactor

    NASA Astrophysics Data System (ADS)

    Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.

    2017-12-01

    Instant power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of the 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for achieving the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and to calibrate 238U chambers against the responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the standard monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the standard detectors are located. Owing to the low background, the detectors of the neutron chambers do not need calibration in the reactor, because that would amount to determining the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
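
    The quadrature idea is the standard Gauss-Legendre one: when every "function evaluation" is an expensive irradiation at a source position, a Gauss rule extracts the integral of the response from the fewest positions. The response function below is a made-up stand-in for a monitor response; only the mechanics are illustrated.

      import numpy as np

      def detector_response(z):
          """Hypothetical monitor response vs. source position z in [-1, 1]."""
          return 1.0 / (1.5 + z) ** 2

      for n_points in (2, 4, 8):
          z, w = np.polynomial.legendre.leggauss(n_points)  # nodes = positions
          print(n_points, float(np.dot(w, detector_response(z))))

      # The exact integral over [-1, 1] is 1/0.5 - 1/2.5 = 1.6; the estimates
      # converge to it rapidly as points (irradiations) are added.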

  4. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
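
    A bare-bones dense-matrix Rayleigh quotient iteration, to make the named eigenvalue solver concrete; Denovo applies the same idea to the transport operator, with the (ill-conditioned) shifted solve handled by the preconditioned Krylov machinery discussed above rather than by a dense factorization.

      import numpy as np

      def rqi(A, x, iters=10):
          """Rayleigh quotient iteration for a symmetric matrix A."""
          x = x / np.linalg.norm(x)
          for _ in range(iters):
              sigma = x @ A @ x                    # Rayleigh quotient shift
              try:                                 # shifted solve; nearly singular
                  y = np.linalg.solve(A - sigma * np.eye(len(A)), x)
              except np.linalg.LinAlgError:        # shift hit an exact eigenvalue
                  break
              x = y / np.linalg.norm(y)
          return x @ A @ x, x

      rng = np.random.default_rng(4)
      B = rng.standard_normal((6, 6)); A = B + B.T # random symmetric test matrix
      val, vec = rqi(A, rng.standard_normal(6))
      print(val, np.linalg.eigvalsh(A))            # val matches one eigenvalue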

  5. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations were written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
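
    For the one-sided case the probability equation has a standard closed form via the noncentral t-distribution: k = t'_γ(n−1, z_p√n)/√n, where z_p is the standard normal quantile of the covered proportion. The sketch below uses that textbook relation (the report's own codes also handle the two-sided case, which lacks such a closed form).

      import numpy as np
      from scipy.stats import nct, norm

      def k_one_sided(n, p, gamma):
          """Exact one-sided tolerance factor for a normal sample of size n."""
          nc = norm.ppf(p) * np.sqrt(n)            # noncentrality parameter
          return float(nct.ppf(gamma, df=n - 1, nc=nc) / np.sqrt(n))

      print(k_one_sided(10, 0.95, 0.95))           # tabulated value is about 2.911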

  6. Optimized random phase only holograms.

    PubMed

    Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto

    2018-02-15

    We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
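
    The G-S ingredient itself is compact: alternate between the hologram plane and the image plane, keeping the computed phase and re-imposing the known amplitude in each. The loop below is a plain Gerchberg-Saxton sketch with arbitrary stand-in parameters, not the ORAP procedure built on top of it.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 256
      target = np.zeros((n, n)); target[96:160, 96:160] = 1.0   # desired amplitude

      phase = np.exp(1j * 2 * np.pi * rng.random((n, n)))
      for _ in range(50):
          field = np.fft.ifft2(target * phase)            # back to hologram plane
          hologram = np.exp(1j * np.angle(field))         # phase-only constraint
          recon = np.fft.fft2(hologram)                   # forward to image plane
          phase = np.exp(1j * np.angle(recon))            # keep phase, reset amplitude

      err = np.linalg.norm(np.abs(recon) / np.abs(recon).max() - target)
      print(float(err))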

  7. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full testing in SULTAN: 1. The mechanical role of copper strands in a CICC

    DOE PAGES

    Sanabria, Carlos; Lee, Peter J.; Starch, William; ...

    2015-06-22

    Cables made with Nb3Sn-based superconductor strands will provide the 13 T maximum peak magnetic field of the ITER Central Solenoid (CS) coils, and they must survive up to 60,000 electromagnetic cycles. Accordingly, prototype designs of CS cable-in-conduit conductors (CICC) were electromagnetically tested over multiple magnetic field cycles and warm-up-cool-down scenarios in the SULTAN facility at CRPP. We report here a post-mortem metallographic analysis of two CS CICC prototypes which exhibited some rate of irreversible performance degradation during cycling. The standard ITER CS CICC cable design uses a combination of superconducting and Cu strands, and because the Lorentz force on the strand is proportional to the transport current in the strand, removing the copper strands (while increasing the Cu:SC ratio of the superconducting strands) was proposed as one way of reducing the strand load. In this study we compare the two alternative CICCs, with and without Cu strands, keeping in mind that the degradation after the SULTAN test was lower for the CICC without Cu strands. The post-mortem metallographic evaluation revealed that the overall strand transverse movement was 20% lower in the CICC without Cu strands and that fewer tensile filament fractures were found, both indications of an overall reduction in high tensile strain regions. Furthermore, it was interesting to see that the Cu strands in the mixed cable design (with higher degradation) helped reduce the contact stresses on the high pressure side of the CICC; in either case, however, the strain reduction mechanisms were not enough to suppress cyclic degradation. Advantages and disadvantages of each conductor design are discussed with the aim of understanding the sources of the degradation.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Han, Y; Jin, M

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with a much reduced number of projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set - the L2 norm of the difference between the observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set - the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS); it is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images as a function of the number of iterations is used as the convergence measure. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
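
    The sequential-projection structure is easy to sketch. Below, an ART sweep and the exact ROF projection are replaced by a single gradient (Landweber) step and scikit-image's Chambolle TV denoiser, and a random matrix stands in for the CT projector, so this shows the DF/NN/PC cycle only, not the paper's FS-POCS implementation.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(6)
      n = 32
      x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0; x_true[12:20, 12:20] = 2.0

      A = rng.standard_normal((600, n * n)) / np.sqrt(n * n)  # stand-in projector
      b = A @ x_true.ravel()                                  # sparse-view data
      step = 1.0 / np.linalg.norm(A, 2) ** 2

      x = np.zeros((n, n))
      for _ in range(100):
          v = x.ravel() + step * (A.T @ (b - A @ x.ravel())) # DF: toward the data
          x = np.maximum(v.reshape(n, n), 0.0)               # NN: clamp negatives
          x = denoise_tv_chambolle(x, weight=0.05)           # PC: TV surrogate
      print(float(np.abs(x - x_true).mean()))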

  9. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature--both abstracts and full text--in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return…

  10. Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments

    NASA Technical Reports Server (NTRS)

    Wu (u. Sjarpm)

    2012-01-01

    The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.

  11. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  12. Conceptual design of the tangentially viewing combined interferometer-polarimeter for ITER density measurements.

    PubMed

    Van Zeeland, M A; Boivin, R L; Brower, D L; Carlstrom, T N; Chavez, J A; Ding, W X; Feder, R; Johnson, D; Lin, L; O'Neill, R C; Watts, C

    2013-04-01

    One of the systems planned for the measurement of electron density in ITER is a multi-channel tangentially viewing combined interferometer-polarimeter (TIP). This work discusses the current status of the design, including a preliminary optical table layout, calibration options, error sources, and performance projections based on a CO2/CO laser system. In the current design, two-color interferometry is carried out at 10.59 μm and 5.42 μm and a separate polarimetry measurement of the plasma induced Faraday effect, utilizing the rotating wave technique, is made at 10.59 μm. The inclusion of polarimetry provides an independent measure of the electron density and can also be used to correct the conventional two-color interferometer for fringe skips at all densities, up to and beyond the Greenwald limit. The system features five chords with independent first mirrors to reduce risks associated with deposition, erosion, etc., and a common first wall hole to minimize penetration sizes. Simulations of performance for a projected ITER baseline discharge show the diagnostic will function as well as, or better than, comparable existing systems for feedback density control. Calculations also show that finite temperature effects will be significant in ITER even for moderate temperature plasmas and can lead to a significant underestimate of electron density. A secondary role TIP will fulfill is that of a density fluctuation diagnostic; using a toroidal Alfvén eigenmode as an example, simulations show TIP will be extremely robust in this capacity and potentially able to resolve coherent mode fluctuations with perturbed densities as low as δn/n ≈ 10⁻⁵.
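
    For orientation, the textbook two-color identity behind such a design (stated here as background, not taken from the paper): each wavelength accumulates a plasma phase proportional to λ and a vibration phase proportional to 1/λ, so two wavelengths separate the two unknowns. Here φ_i are the measured phases, r_e is the classical electron radius, and ΔL is the vibration-induced path change:

      \phi_i = r_e \lambda_i \int n_e \, dl + \frac{2\pi}{\lambda_i} \Delta L,
      \qquad i = 1, 2
      \quad\Longrightarrow\quad
      \int n_e \, dl = \frac{\lambda_1 \phi_1 - \lambda_2 \phi_2}
                            {r_e \left( \lambda_1^{2} - \lambda_2^{2} \right)}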

  13. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
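
    A condensed sketch of the overdetermined (m ≫ n) case: precondition with a random normal projection plus a small SVD, then hand the well-conditioned system to LSQR. The γ = 2 and the preconditioner shape follow the abstract; the toy ill-conditioned matrix and everything else are assumptions, and the parallel aspects are omitted.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(7)
      m, n, gamma = 5000, 50, 2.0
      A = rng.standard_normal((m, n)) * np.logspace(0, 4, n)  # ill-conditioned
      b = rng.standard_normal(m)

      G = rng.standard_normal((int(np.ceil(gamma * n)), m))   # random projection
      _, s, Vt = np.linalg.svd(G @ A, full_matrices=False)    # small SVD
      N = Vt.T / s                                            # right preconditioner
      y = lsqr(A @ N, b)[0]                                   # predictable iterations
      x = N @ y
      print(float(np.linalg.norm(A.T @ (A @ x - b))))         # normal-eq. residual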

  14. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
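
    For contrast with the local variant studied above, here is textbook global value iteration on a tiny random MDP: every state is backed up in every sweep, whereas the paper's algorithm updates values and controls only on a subset of the state space per iteration. The MDP itself is arbitrary.

      import numpy as np

      rng = np.random.default_rng(8)
      n_states, n_actions, discount = 6, 3, 0.9
      P = rng.random((n_actions, n_states, n_states))
      P /= P.sum(axis=2, keepdims=True)             # row-stochastic transitions
      R = rng.random((n_actions, n_states))         # reward for (action, state)

      V = np.zeros(n_states)
      for sweep in range(1000):
          Q = R + discount * P @ V                  # backup over the whole space
          V_new = Q.max(axis=0)                     # greedy improvement
          if np.abs(V_new - V).max() < 1e-8:        # termination criterion
              break
          V = V_new
      print(sweep, V, Q.argmax(axis=0))             # values and greedy policy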

  15. 75 FR 43107 - Full-Size and Non-Full Size Baby Cribs: Withdrawal of Advance Notice of Proposed Rulemaking

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ... worked with the voluntary standards group, ASTM International (formerly known as the American Society for Testing and Materials), which added provisions in its standard for full-size baby cribs, ASTM F 1169, to... the same as voluntary standards ASTM F 1169-10, Standard Consumer Safety Specification for Full-Size...

  16. 75 FR 43107 - Revocation of Requirements for Full-Size Baby Cribs and Non-Full-Size Baby Cribs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ... Improvement Act of 2008 (``CPSIA'') requires the United States Consumer Product Safety Commission (``CPSC'' or... CONSUMER PRODUCT SAFETY COMMISSION 16 CFR Parts 1508 and 1509 [CPSC Docket No. CPSC-2010-0075] Revocation of Requirements for Full-Size Baby Cribs and Non-Full- Size Baby Cribs AGENCY: Consumer Product...

  17. Adaptive Channel Measurement Study

    DTIC Science & Technology

    1975-09-01

    of P3 as a Function of Step Size and Iteration Number With and Without Noise Using the LMS Algorithm and a Quadratic Model at a Fade... real, a1(t) will vanish, and the linear term is a filtered version of the input signal, with a filter identical to the lowpass equivalent of the...

  18. ’In situ’ Measurement of the Ratio of Aerosol Absorption to Extinction Coefficient.

    DTIC Science & Technology

    1980-08-01

    procedure for settling measurements was to obtain a reference (presmoke) level of stabilized power on both of the calorimeters indicated in figure 1... sizing measurements which might be appropriate and accurate for this application were also being investigated. REFERENCES: 1. Selby, J. E. A., and L... "Projectiles," ECOM-5570, August 1975. 7. Duncan, Louis D., "An Improved Algorithm for the Iterated Minimal Information Solution for Remote Sounding of...

  19. Development of 3D Oxide Fuel Mechanics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, B. W.; Casagranda, A.; Pitts, S. A.

    This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.

  20. Developing a quit smoking website that is usable by people with severe mental illnesses.

    PubMed

    Ferron, Joelle C; Brunette, Mary F; McHugo, Gregory J; Devitt, Timothy S; Martin, Wendy M; Drake, Robert E

    2011-01-01

    Evidence-based treatments may be delivered in computerized, web-based formats. This strategy can deliver the intervention consistently with minimal treatment provider time and cost. However, standard websites may not be usable by people with severe mental illnesses, who may experience cognitive deficits and low computer experience. This manuscript reports on the iterative development and usability testing of a website designed to educate and motivate adults with severe mental illnesses to engage in smoking cessation activities. Three phases of semi-structured interviews were conducted with participants after they used the program, and were combined with information from screen-recorded usability data. T-tests compared use of the first version of the program with use of a later version. Iteratively conducted usability tests demonstrated an increased ease of use from the first to the last version of the website, reflected in a significant reduction in the percentage of unproductive clicking along with fewer questions asked about how to use the program. The improvement in use of the website resulted from changes such as integrating a mouse tutorial, increasing font sizes, and increasing button sizes. The website usability recommendations provide guidelines for interventionists developing web tools for people who experience serious psychiatric disabilities. In general, insights from the study highlight the need for thoughtful design and usability testing when creating a website for people with severe mental illness.

  1. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  2. Intragranular cellular segregation network structure strengthening 316L stainless steel prepared by selective laser melting

    NASA Astrophysics Data System (ADS)

    Zhong, Yuan; Liu, Leifeng; Wikman, Stefan; Cui, Daqing; Shen, Zhijian

    2016-03-01

    A feasibility study was performed to fabricate ITER In-Vessel components by Selective Laser Melting (SLM), supported by Fusion for Energy (F4E). Almost fully dense 316L stainless steel (SS316L) components were prepared from gas-atomized powder with optimized SLM processing parameters. Tensile tests and Charpy-V tests were carried out at 22 °C and 250 °C, and the results showed that SLM SS316L fulfills the RCC-MR code. Microstructure characterization reveals the presence of hierarchical macro-, micro- and nano-structures in as-built samples that are very different from SS316L microstructures prepared by other established methods. The formation of a characteristic intragranular cellular segregation network microstructure appears to contribute to the increase of yield strength without loss of ductility. Silicon oxide nano-inclusions formed during the SLM process generate a micro-hardness fluctuation in the building direction. The combined influence of the cellular microstructure and the nano-inclusions constrains the size of ductile dimples to the nano-scale. Crack propagation is hindered by a pinning effect that improves the defect tolerance of the SLM SS316L. This work shows that it is possible to manufacture SS316L with properties suitable for ITER First Wall panels. Further studies on the irradiation properties of SLM SS316L and on the manufacturing of larger real-size components are needed.

  3. Evaluation of response variables in computer-simulated virtual cataract surgery

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif

    2006-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision in the estimation of response variables identified for measurement of the performance of VR phaco surgery. We identified 31 response variables measuring: the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, eight medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals and for iterations of sessions within individuals were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variability for sample size requirements were determined. It was found that there was generally more variability across iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting a difference between independent groups, or a change within a group, for the sculpting phase as compared to the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for evaluation of VR phaco surgery.

  4. ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution

    PubMed Central

    Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra

    2016-01-01

    Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamics (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296

  5. Multi-objective/loading optimization for rotating composite flexbeams

    NASA Technical Reports Server (NTRS)

    Hamilton, Brian K.; Peters, James R.

    1989-01-01

    With the evolution of advanced composites, the feasibility of designing bearingless rotor systems for high speed, demanding maneuver envelopes, and high aircraft gross weights has become a reality. These systems eliminate the need for hinges and heavily loaded bearings by incorporating a composite flexbeam structure which accommodates flapping, lead-lag, and feathering motions by bending and twisting while reacting the full blade centrifugal force. The flight characteristics of a bearingless rotor system are largely dependent on hub design, and the principal element in this type of system is the composite flexbeam. As in any hub design, trade-off studies must be performed in order to optimize performance, dynamics (stability), handling qualities, and stresses. However, since the flexbeam structure is the primary component which will determine the balance of these characteristics, its design and fabrication are not straightforward. It was concluded that: pitchcase and snubber damper representations are required in the flexbeam model for proper sizing driven by dynamic requirements; optimization is necessary for flexbeam design, since it reduces the design iteration time and results in an improved design; and inclusion of multiple flight conditions and their corresponding fatigue allowables is necessary for the optimization procedure.

  6. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  7. Parallel Implementation of 3-D Iterative Reconstruction With Intra-Thread Update for the jPET-D4

    NASA Astrophysics Data System (ADS)

    Lam, Chih Fung; Yamaya, Taiga; Obi, Takashi; Yoshida, Eiji; Inadama, Naoko; Shibuya, Kengo; Nishikido, Fumihiko; Murayama, Hideo

    2009-02-01

    One way to speed up iterative image reconstruction is parallel computing with a computer cluster. However, as the number of computing threads increases, parallel efficiency decreases due to network transfer delay. In this paper, we propose a method to reduce data transfer between computing threads by introducing an intra-thread update. The update factor is collected from each slave thread and a global image is updated as usual during the first K sub-iterations. In the remaining sub-iterations, the global image is updated only at an interval controlled by a parameter L. Between global updates, the intra-thread update is carried out, whereby an image update is performed locally in each slave thread. We investigated combinations of the K and L parameters based on a parallel implementation of RAMLA for the jPET-D4 scanner. Our evaluation used four workstations with a total of 16 slave threads. Each slave thread calculated a different set of LORs, divided according to ring difference numbers. We assessed the image quality of the proposed method with a hotspot simulation phantom. The figures of merit were the full width at half maximum of the hotspots and the background normalized standard deviation. At an optimum K and L setting, we did not find significant change in the output images. We also applied the proposed method to a Hoffman phantom experiment and found that the difference due to the intra-thread update was negligible. With the intra-thread update, computation time could be reduced by about 23%.

  8. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.

  9. Bidirectional iterative parcellation of diffusion weighted imaging data: Separating cortical regions connected by the arcuate fasciculus and extreme capsule

    PubMed Central

    Patterson, Dianne K.; Van Petten, Cyma; Beeson, Pélagie M.; Rapcsak, Steven Z.; Plante, Elena

    2014-01-01

    This paper introduces a Bidirectional Iterative Parcellation (BIP) procedure designed to identify the location and size of connected cortical regions (parcellations) at both ends of a white matter tract in diffusion weighted images. The procedure applies the FSL option “probabilistic tracking with classification targets” in a bidirectional and iterative manner. To assess the utility of BIP, we applied the procedure to the problem of parcellating a limited set of well-established gray matter seed regions associated with the dorsal (arcuate fasciculus/superior longitudinal fasciculus) and ventral (extreme capsule fiber system) white matter tracts in the language networks of 97 participants. These left hemisphere seed regions and the two white matter tracts, along with their right hemisphere homologues, provided an excellent test case for BIP because the resulting parcellations overlap and their connectivity via the arcuate fasciculi and extreme capsule fiber systems are well studied. The procedure yielded both confirmatory and novel findings. Specifically, BIP confirmed that each tract connects within the seed regions in unique, but expected ways. Novel findings included increasingly left-lateralized parcellations associated with the arcuate fasciculus/superior longitudinal fasciculus as a function of age and education. These results demonstrate that BIP is an easily implemented technique that successfully confirmed cortical connectivity patterns predicted in the literature, and has the potential to provide new insights regarding the architecture of the brain. PMID:25173414

  10. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  11. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.

  12. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images, one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
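
    The final-image step is, per the abstract, a plain linear combination of the two partial reconstructions. A minimal sketch, assuming the two IIR outputs are already available as arrays; the weight values are placeholders, not the paper's:

      import numpy as np

      def combine_bct(image_gray, image_edge, w_gray=1.0, w_edge=0.5):
          # Blend the gray-level reconstruction with the high-frequency-enhanced
          # one; w_gray and w_edge are the two user-facing parameters.
          return w_gray * np.asarray(image_gray) + w_edge * np.asarray(image_edge)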

  13. Contribution of ASDEX Upgrade to disruption studies for ITER

    NASA Astrophysics Data System (ADS)

    Pautasso, G.; Zhang, Y.; Reiter, B.; Giannone, L.; Gruber, O.; Herrmann, A.; Kardaun, O.; Khayrutdinov, K. K.; Lukash, V. E.; Maraschek, M.; Mlynek, A.; Nakamura, Y.; Schneider, W.; Sias, G.; Sugihara, M.; ASDEX Upgrade Team

    2011-10-01

    This paper describes the most recent contributions of ASDEX Upgrade to ITER in the field of disruption studies. (1) The ITER specifications for the halo current magnitude are based on data collected from several tokamaks and summarized in the plot of the toroidal peaking factor versus the maximum halo current fraction. Even though the maximum halo current in ASDEX Upgrade reaches 50% of the plasma current, this maximum lasts only a fraction of a millisecond. (2) Long-lasting asymmetries of the halo current are rare and do not give rise to a large asymmetric component of the mechanical forces on the machine. Unlike at JET, these asymmetries are neither locked nor do they exhibit a stationary harmonic structure. (3) Recent work on disruption prediction has concentrated on the search for a simple function of the most relevant plasma parameters that is able to discriminate between the safe and pre-disruption phases of a discharge. For this purpose, the disruptions of the last four years have been classified into groups, and discriminant analysis is then used to select the most significant variables and to derive the discriminant function. (4) The attainment of the critical density for the collisional suppression of runaway electrons seems to be technically and physically possible on our medium-size tokamak. The CO2 interferometer and the AXUV diagnostic provide information on the highly 3D impurity transport process during the whole plasma quench.

  14. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
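
    The iteration described above is straightforward to state. A rough sketch (assumed geometry and parameter values, not the authors' code): start from the plain centre of gravity, then repeatedly re-weight each channel by a Gaussian centred at the current estimate.

      import numpy as np

      def iterative_weighted_cog(xy, signals, sigma=2.0, n_iter=20):
          # xy: (n, 2) channel positions; signals: (n,) channel amplitudes;
          # sigma: assumed width of the Gaussian weighting function.
          pos = (signals[:, None] * xy).sum(axis=0) / signals.sum()  # plain CoG
          for _ in range(n_iter):
              w = signals * np.exp(-((xy - pos) ** 2).sum(axis=1) / (2 * sigma ** 2))
              pos = (w[:, None] * xy).sum(axis=0) / w.sum()  # re-weighted CoG
          return pos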

  15. 75 FR 81789 - Third Party Testing for Certain Children's Products; Full-Size Baby Cribs and Non-Full-Size Baby...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... sufficient samples of the product, or samples that are identical in all material respects to the product. The... 1220, Safety Standards for Full-Size Baby Cribs and Non-Full- Size Baby Cribs. A true copy, in English... assessment bodies seeking accredited status must submit to the Commission copies, in English, of their...

  16. Domain Derivatives in Dielectric Rough Surface Scattering

    DTIC Science & Technology

    2015-01-01

    and require the gradient of the objective function in the unknown model parameter vector at each stage of iteration. For large N, finite...differencing becomes numerically intensive, and an efficient alternative is domain differentiation in which the full gradient is obtained by solving a single...derivative calculation of the gradient for a locally perturbed dielectric interface. The method is non-variational, and algebraic in nature in that it

  17. Path-oriented early reaction to approaching disruptions in ASDEX Upgrade and TCV in view of the future needs for ITER and DEMO

    NASA Astrophysics Data System (ADS)

    Maraschek, M.; Gude, A.; Igochine, V.; Zohm, H.; Alessi, E.; Bernert, M.; Cianfarani, C.; Coda, S.; Duval, B.; Esposito, B.; Fietz, S.; Fontana, M.; Galperti, C.; Giannone, L.; Goodman, T.; Granucci, G.; Marelli, L.; Novak, S.; Paccagnella, R.; Pautasso, G.; Piovesan, P.; Porte, L.; Potzel, S.; Rapson, C.; Reich, M.; Sauter, O.; Sheikh, U.; Sozzi, C.; Spizzo, G.; Stober, J.; Treutterer, W.; Zanca, P.; ASDEX Upgrade Team; TCV Team; the EUROfusion MST1 Team

    2018-01-01

    Routine reaction to approaching disruptions in tokamaks is currently largely limited to machine protection by mitigating an ongoing disruption, which remains a basic requirement for ITER and DEMO [1]. Nevertheless, a mitigated disruption still generates stress to the device. Additionally, in future fusion devices, high-performance discharge time itself will be very valuable. Instead of reacting only on generic features occurring shortly before the disruption, the ultimate goal is to actively avoid approaching disruptions at an early stage, sustain the discharges whenever possible and restrict mitigated disruptions to major failures. Knowledge of the most relevant root causes and the corresponding chain of events leading to disruption, the disruption path, is a prerequisite. For each disruption path, physics-based sensors and adequate actuators must be defined and their limitations considered. Early reaction facilitates the efficiency of the actuators and enhances the probability of a full recovery. Thus, sensors that detect potential disruptions in time are to be identified. Once the entrance into a disruption path is detected, we propose a hierarchy of actions consisting of (I) recovery of the discharge to full performance, or at least continuation with a less disruption-prone backup scenario, (II) complete avoidance of the disruption to sustain the discharge, or at least delaying it for a controlled termination, and (III), only as a last resort, disruption mitigation. Based on the understanding of disruption paths, a hierarchical and path-specific handling strategy must be developed. Such schemes, testable in present devices, could serve as guidelines for ITER and DEMO operation. For some disruption paths, experiments have been performed at ASDEX Upgrade and TCV. Disruptions were provoked in TCV by impurity injection into ELMy H-mode discharges and in ASDEX Upgrade by forcing a density limit in H-mode discharges. The new approach proposed in this paper is discussed for these cases. For the H-mode density limit, the sensors used so far react too late. Thus a plasma-state boundary is proposed that can serve as an adequate early sensor for avoiding density-limit disruptions in H-modes and for recovery to full performance.

  18. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to design optimization of large-size format light guide plates (LGPs), such as those used in photo-bioreactors, where the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical printing resolution of ink dots. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot pattern optimization.

  19. Gastric precancerous diseases classification using CNN with a concise model.

    PubMed

    Zhang, Xu; Hu, Weiling; Chen, Fei; Liu, Jiquan; Yang, Yuanhang; Wang, Liangjing; Duan, Huilong; Si, Jianmin

    2017-01-01

    Gastric precancerous diseases (GPD) may deteriorate into early gastric cancer if misdiagnosed, so it is important to help doctors recognize GPD accurately and quickly. In this paper, we realize the classification of 3-class GPD, namely polyp, erosion, and ulcer, using convolutional neural networks (CNN) with a concise model called the Gastric Precancerous Disease Network (GPDNet). GPDNet introduces fire modules from SqueezeNet to reduce the model size and parameter count by about a factor of 10 while improving speed for quick classification. To maintain classification accuracy with fewer parameters, we propose an innovative method called iterative reinforced learning (IRL). After training GPDNet from scratch, we apply IRL to fine-tune the parameters whose values are close to 0, and then we take the modified model as a pretrained model for the next training. The results show that IRL can improve the accuracy by about 9% after 6 iterations. The final classification accuracy of our GPDNet was 88.90%, which is promising for clinical GPD recognition.
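
    The IRL loop described above can be sketched generically. Everything here is an assumption for illustration: the weights as a dict of numpy arrays, a train_fn that retrains the model from the given weights, and the threshold eps.

      import numpy as np

      def iterative_reinforced_learning(weights, train_fn, eps=1e-3, rounds=6):
          # Repeatedly reset parameters whose values are close to 0, then use
          # the modified model as the pretrained model for the next training.
          for _ in range(rounds):
              for name in weights:
                  w = weights[name]
                  w[np.abs(w) < eps] = 0.0
              weights = train_fn(weights)
          return weights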

  20. An iterative glycosyltransferase EntS catalyzes transfer and extension of O- and S-linked monosaccharide in enterocin 96.

    PubMed

    Nagar, Rupa; Rao, Alka

    2017-05-12

    Glycosyltransferases are essential tools for in vitro glycoengineering. Bacteria harbor an unexplored variety of protein glycosyltransferases. Here, we describe a peptide glycosyltransferase (EntS) encoded by ORF0417 of Enterococcus faecalis TX0104. EntS di-glycosylates the linear peptide of enterocin 96, a known antibacterial, in vitro. It is capable of transferring as well as extending the glycan onto the peptide in an iterative, sequential, dissociative manner. It can catalyze multiple linkages: Glc/Gal(-O)Ser/Thr, Glc/Gal(-S)Cys and Glc/Gal(β)Glc/Gal(-O/S)Ser/Thr/Cys, in one pot. Using EntS-generated glycovariants of the enterocin 96 peptide, the size and identity of the glycan are found to influence the bioactivity of the peptide. The study identifies EntS as an enzyme worth pursuing for in vitro peptide glycoengineering.

  1. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it does not ever assemble a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
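
    The abstract does not detail JAS3D's iterative scheme; as a generic illustration of the matrix-free idea it describes, a conjugate-gradient loop needs only a routine that applies the operator to a vector, so the global stiffness matrix is never formed. Here apply_A is a placeholder for an element-by-element operator application.

      import numpy as np

      def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
          # Matrix-free CG for a symmetric positive-definite operator:
          # apply_A(v) must return A @ v without assembling A.
          x = np.zeros_like(b)
          r = b - apply_A(x)
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = apply_A(p)
              alpha = rs / (p @ Ap)
              x = x + alpha * p
              r = r - alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x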

  2. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  3. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of extending the frequency range of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
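
    For reference, the generic FISTA iteration (Beck and Teboulle) that Propagation-FISTA builds on; grad_f, prox_g and the Lipschitz constant L are placeholders standing in for the acoustic propagation model and the sparsity prior.

      import numpy as np

      def fista(grad_f, prox_g, x0, L, n_iter=200):
          # x: current estimate; y: extrapolated point; t: momentum parameter.
          x, y, t = x0.copy(), x0.copy(), 1.0
          for _ in range(n_iter):
              x_new = prox_g(y - grad_f(y) / L, 1.0 / L)   # gradient + proximal step
              t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
              x, t = x_new, t_new
          return x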

  4. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  5. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
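
    With the Euclidean mirror map, one step of online composite mirror descent reduces to a proximal online gradient step. A minimal sketch with polynomially decaying step sizes; loss_grad, prox_reg and the decay exponent theta are placeholders.

      import numpy as np

      def online_composite_md(stream, loss_grad, prox_reg, dim, theta=0.5):
          # stream: iterable of (x, y) examples; prox_reg(v, eta) is the
          # proximal map of the regularizer (e.g. soft-thresholding for l1).
          w = np.zeros(dim)
          for t, (x, y) in enumerate(stream, start=1):
              eta = t ** (-theta)                    # polynomially decaying step size
              w = prox_reg(w - eta * loss_grad(w, x, y), eta)
          return w                                   # last iterate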

  6. Coupling the snow thermodynamic model SNOWPACK with the microwave emission model of layered snowpacks for subarctic and arctic snow water equivalent retrievals

    NASA Astrophysics Data System (ADS)

    Langlois, A.; Royer, A.; Derksen, C.; Montpetit, B.; Dupont, F.; GoïTa, K.

    2012-12-01

    Satellite passive microwave remote sensing has been extensively used to estimate snow water equivalent (SWE) in northern regions. Although passive microwave sensors operate independently of solar illumination and the lower frequencies are independent of atmospheric conditions, the coarse spatial resolution introduces uncertainties into SWE retrievals due to the surface heterogeneity within individual pixels. In this article, we investigate the coupling of a thermodynamic multilayered snow model with a passive microwave emission model. Results show that the snow model by itself provides poor SWE simulations when compared to field measurements from two major field campaigns. Coupling the snow and microwave emission models with successive iterations to correct the influence of snow grain size and density significantly improves the SWE simulations. This method was further validated using an additional independent data set, which also showed significant improvement of the two-step iteration method over standalone simulations with the snow model.

  7. GPU computing with Kaczmarz’s and other iterative algorithms for linear systems

    PubMed Central

    Elble, Joseph M.; Sahinidis, Nikolaos V.; Vouzis, Panagiotis

    2009-01-01

    The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The differential equations studied are strongly convection-dominated, of various sizes, and common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz’s, Cimmino’s, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed with dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. While the CGNR method had begun to fall out of favor for solving such problems, for the problems studied in this paper, the CGNR method implemented on the GPU performed better than the other methods, including a cluster implementation of the CARP-CG method. PMID:20526446
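
    As a CPU reference point for the row-projection methods compared above, the classic Kaczmarz sweep is only a few lines; this is a generic sketch, not the paper's GPU kernel.

      import numpy as np

      def kaczmarz(A, b, n_sweeps=50):
          # Cyclically project the iterate onto the hyperplane of each row.
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  a = A[i]
                  x += (b[i] - a @ x) / (a @ a) * a
          return x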

  8. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    PubMed Central

    Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908

  9. Adaptive strategies for materials design using uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  10. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.

    PubMed

    Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.

  11. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.

  12. A calibrated iterative reconstruction for quantitative photoacoustic tomography using multi-angle light-sheet illuminations

    NASA Astrophysics Data System (ADS)

    Wang, Yihan; Lu, Tong; Zhang, Songhe; Song, Shaoze; Wang, Bingyuan; Li, Jiao; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Quantitative photoacoustic tomography (q-PAT) is a nontrivial technique that can be used to reconstruct the absorption image with a high spatial resolution. Several attempts have been investigated by setting point sources or fixed-angle illuminations. However, in practical applications, these schemes normally suffer from low signal-to-noise ratio (SNR) or poor quantification, especially for large-size domains, due to the limitation of the ANSI-safety incidence and incompleteness in the data acquisition. We herein present a q-PAT implementation that uses multi-angle light-sheet illuminations and a calibrated iterative multi-angle reconstruction. The approach can acquire more complete information on the intrinsic absorption and SNR-boosted photoacoustic signals at selected planes from the multi-angle wide-field excitations of the light-sheet. Therefore, the sliced absorption maps over the whole body can be recovered in a measurement-flexible, noise-robust and computation-economic way. The proposed approach is validated by a phantom experiment, exhibiting promising performance in image fidelity and quantitative accuracy.

  13. A survey of packages for large linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, thus their user interfaces may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to directly deal with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user interface. In addition to reviewing these portable parallel iterative solver packages, we also provide a more cursory assessment of a range of related packages, from specialized parallel preconditioners to direct methods for sparse linear systems.

  14. FENDL: International reference nuclear data library for fusion applications

    NASA Astrophysics Data System (ADS)

    Pashchenko, A. B.; Wienke, H.; Ganesan, S.

    1996-10-01

    The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p), extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1, as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of FENDL-2 are expected to be ready by mid-1996 for use by the ITER team in the final phase of ITER EDA, after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA Nuclear Data Section online system through the Internet. A grand total of 54 (sub)directories with 845 files, with a total size of about 2 million blocks or about 1 gigabyte (1 block = 512 bytes) of numerical data, is currently available online.

  15. SU-E-P-49: Evaluation of Image Quality and Radiation Dose of Various Unenhanced Head CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Khan, M; Alapati, K

    2015-06-15

    Purpose: To evaluate the diagnostic value of various unenhanced head CT protocols and to predict an acceptable radiation dose level for head CT exams. Methods: Our retrospective analysis included 3 groups, 20 patients per group, who underwent clinical routine unenhanced adult head CT examination. All exams were performed axially with 120 kVp. Three protocols, 380 mAs without iterative reconstruction and automAs, 340 mAs with iterative reconstruction without automAs, and 340 mAs with iterative reconstruction and automAs, were applied to each group of patients respectively. The images were reconstructed with H30, J30 for the brain window and H60, J70 for the bone window. Images acquired with the three protocols were randomized and blindly reviewed by three radiologists. A 5-point scale was used to rate each exam. The percentage of exams scored above 3 and the average scores of each protocol were calculated for each reviewer and tissue type. Results: For protocols without automAs, the average scores of the bone window with iterative reconstruction were higher than those without iterative reconstruction for each reviewer, although the radiation dose was 10% lower. 100% of exams were scored 3 or higher and the average scores were above 4 for both brain and bone reconstructions. The CTDIvols are 64.4 and 57.8 mGy for 380 and 340 mAs, respectively. With automAs, the radiation dose varied with head size, resulting in an average CTDIvol of 47.5 mGy, ranging between 39.5 and 56.5 mGy. 93% and 98% of exams were scored greater than 3 for brain and bone windows, respectively. The diagnostic confidence level and image quality of exams with automAs were lower than those without automAs for each reviewer. Conclusion: According to these results, the mAs was reduced to 300 with automAs off for the head CT exam. The radiation dose was 20% lower than the original protocol and the CTDIvol was reduced to 51.2 mGy.

  16. A VLSI implementation of DCT using pass transistor technology

    NASA Technical Reports Server (NTRS)

    Kamath, S.; Lynn, Douglas; Whitaker, Sterling

    1992-01-01

    A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in real time, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Dingjie; Xie, Yi Min; Huang, Xiaodong

    Analytical studies on the size effects of a simply-shaped beam fixed at both ends have successfully explained the sudden changes of effective Young's modulus as its diameter decreases below 100 nm. Yet they are invalid for complex nanostructures ubiquitously existing in nature. In accordance with a generalized Young-Laplace equation, one of the representative size effects is transferred to non-uniformly distributed pressure against an external surface due to the imbalance of inward and outward loads. Because the magnitude of pressure depends on the principal curvatures, iterative steps have to be adopted to gradually stabilize the structure in finite element analysis. Computational results are in good agreement with both experiment data and theoretical prediction. Furthermore, the investigation on strengthened and softened Young's modulus for two complex nanostructures demonstrates that the proposed computational method provides a general and effective approach to analyze the size effects for nanostructures in arbitrary shape.

  18. Modified reactive tabu search for the symmetric traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Reactive tabu search (RTS) is an improved variant of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing. RTS thus avoids a known disadvantage of TS, namely the need to tune the tabu list size by hand. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations during which the solutions fail to override the aspiration level, so as to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark instances of the symmetric TSP. The performance of the proposed algorithm is compared with that of TS by using empirical testing, benchmark solutions and a simple probabilistic analysis in order to validate the quality of the solution. The computational results and comparisons show that the proposed algorithm provides a better quality solution than TS.
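
    The paper's exact adaptation rule is not reproduced in the abstract; the sketch below shows one plausible reactive rule of this kind, in which the tenure grows while solutions fail to override the aspiration level and shrinks once improvements resume. All constants are assumptions.

      def adapt_tabu_tenure(tenure, iters_without_improvement,
                            grow=1.1, shrink=0.9, patience=50,
                            t_min=5, t_max=100):
          # Lengthen the tabu list to diversify when the search stagnates,
          # shorten it to intensify when it is making progress.
          if iters_without_improvement > patience:
              return min(t_max, int(tenure * grow) + 1)
          return max(t_min, int(tenure * shrink))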

  19. Control x-ray deformable mirrors with few measurements

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Xue, Junpeng; Idir, Mourad

    2016-09-01

    After years of development from a concept to early experimental stage, X-ray Deformable Mirrors (XDMs) are used in many synchrotron/free-electron laser facilities as a standard x-ray optics tool. XDMs are becoming an integral part of present and future large x-ray and EUV projects and will be essential in exploiting the full potential of the new sources currently under construction. The main objective of using XDMs is to correct wavefront errors or to enable variable focus beam sizes at the sample. Due to the coupling among the N actuators of a DM, it is usually necessary to perform a calibration or training process to drive the DM into the target shape. Commonly, in order to optimize the actuator settings to minimize slope/height errors, an initial measurement needs to be collected with all actuators set to 0, and then either N or 2N measurements are necessary to learn each actuator's behavior sequentially. In total, this means that N+1 or 2N+1 scans are required to perform this learning process. When the number of actuators N is large and the actuator response or the necessary metrology is slow, this learning process can be time consuming. In this work, we present a fast and accurate method to drive an x-ray active bimorph mirror to a target shape with only 3 or 4 measurements. Instead of sequentially measuring and calculating the influence functions of all actuators and then predicting the voltages needed for any desired shape, the metrology data are directly used to "guide" the mirror from its current status towards the particular target slope/height via iterative compensations. The feedback for the iteration process is the discrepancy in curvature calculated using B-spline fitting of the measured height/slope data. In this paper, the feasibility of this simple and effective approach is demonstrated with experiments.

  20. Self-organised criticality in the evolution of a thermodynamic model of rodent thermoregulatory huddling

    PubMed Central

    2017-01-01

    A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups) with the costs of thermogenesis (by contributing heat). The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. In more general terms, huddling is proposed as an ideal system for investigating the interaction between self-organisation and natural selection empirically. PMID:28141809
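
    The Monte Carlo exchange at the heart of the model can be sketched as follows. The parametrization of the exchange probability here is an assumption for illustration (a logistic function of how far an animal's body temperature falls below a preferred value), not the paper's exact expression.

      import numpy as np

      rng = np.random.default_rng(0)

      def exchange_step(groups, body_temp, beta=0.5, t_pref=34.0):
          # groups: list of lists of animal ids; body_temp: dict id -> temperature.
          i = rng.integers(len(groups))
          if not groups[i]:
              return
          animal = groups[i][rng.integers(len(groups[i]))]
          # colder animals are more likely to switch groups
          p_move = 1.0 / (1.0 + np.exp(beta * (body_temp[animal] - t_pref)))
          if rng.random() < p_move:
              groups[i].remove(animal)
              groups[rng.integers(len(groups))].append(animal)  # may rejoin same group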

  1. An Iterated Global Mascon Solution with Focus on Land Ice Mass Evolution

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Sabaka, T.; Rowlands, D. D.; Lemoine, F. G.; Loomis, B. D.; Boy, J. P.

    2012-01-01

    Land ice mass evolution is determined from a new GRACE global mascon solution. The solution is estimated directly from the reduction of the inter-satellite K-band range rate observations taking into account the full noise covariance, and formally iterating the solution. The new solution increases signal recovery while reducing the GRACE KBRR observation residuals. The mascons are estimated with 10-day and 1-arc-degree equal area sampling, applying anisotropic constraints for enhanced temporal and spatial resolution of the recovered land ice signal. The details of the solution are presented including error and resolution analysis. An Ensemble Empirical Mode Decomposition (EEMD) adaptive filter is applied to the mascon solution time series to compute timing of balance seasons and annual mass balances. The details and causes of the spatial and temporal variability of the land ice regions studied are discussed.

  2. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  3. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
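
    The IBI step referred to above has a standard closed form, U_{k+1}(r) = U_k(r) + kT ln(g_k(r) / g_target(r)). A minimal sketch on tabulated RDFs; the kT value and units are assumptions.

      import numpy as np

      def ibi_update(U, g_cg, g_target, kT=2.494):
          # U, g_cg, g_target: arrays tabulated on the same r grid;
          # kT in kJ/mol at 300 K (assumed). Update only where both RDFs > 0.
          mask = (g_cg > 0) & (g_target > 0)
          U_new = U.copy()
          U_new[mask] += kT * np.log(g_cg[mask] / g_target[mask])
          return U_new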

  4. Exploration and extension of an improved Riemann track fitting algorithm

    NASA Astrophysics Data System (ADS)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
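
    The paper's translated-and-scaled variant is not reproduced here, but the basic Riemann-type construction it refines is compact: map the hits onto a paraboloid, fit a plane by total least squares, and read the circle parameters back off the plane. A simplified, unweighted sketch:

      import numpy as np

      def riemann_circle_fit(x, y):
          # Lift the measurements onto z = x^2 + y^2.
          z = x ** 2 + y ** 2
          P = np.column_stack([x, y, z])
          mean = P.mean(axis=0)
          # Total-least-squares plane fit: smallest right singular vector.
          _, _, Vt = np.linalg.svd(P - mean)
          n = Vt[-1]
          c = -n @ mean
          # Plane n0*x + n1*y + n2*(x^2 + y^2) + c = 0 is a circle in the x-y plane.
          cx = -n[0] / (2 * n[2])
          cy = -n[1] / (2 * n[2])
          r = np.sqrt((n[0] ** 2 + n[1] ** 2 - 4 * n[2] * c) / (4 * n[2] ** 2))
          return cx, cy, r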

  5. Fast Ion Effects During Test Blanket Module Simulation Experiments in DIII-D

    NASA Astrophysics Data System (ADS)

    Kramer, G. J.; Budny, R.; Nazikian, R.; Heidbrink, W. W.; Kurki-Suonio, T.; Salmi, A.; Schaffer, M. J.; van Zeeland, M. A.; Shinohara, K.; Snipes, J. A.; Spong, D.

    2010-11-01

    The fast beam-ion confinement in the presence of a scaled mock-up of two Test Blanket Modules (TBM) for ITER was studied in DIII-D. The TBM on DIII-D has four vertically arranged protective carbon tiles with thermocouples placed at the back of each tile. Temperature increases of up to 200°C were measured for the two tiles closest to the midplane when the TBM fields were present. These measurements agree qualitatively with results from the full orbit-following beam-ion code, SPIRAL, which predicts beam-ion losses to be localized on the central two carbon tiles when the TBM fields are present. Within the experimental uncertainties no significant change in the fast-ion population was found in the core of these plasmas, which is consistent with the SPIRAL analysis. These experiments indicate that the TBM fields do not affect the fast-ion confinement in a harmful way, which is good news for ITER.

  6. Chemistry-split techniques for viscous reactive blunt body flow computations

    NASA Technical Reports Server (NTRS)

    Li, C. P.

    1987-01-01

    The weak-coupling structure between the fluid and species equations has been exploited, resulting in three closely related time-iterative implicit techniques. While the primitive variables are solved in two separate groups, each by an Alternating Direction Implicit (ADI) factorization scheme, the rate-species Jacobian can be treated in either full or diagonal matrix form, or simply ignored. The latter two versions reduce the split technique to solving for species as scalar rather than vector variables. The solution is completed at the end of each iteration after determining temperature and pressure from the flow density, energy and species concentrations. Numerical experimentation has shown that the split scalar technique, using a partial rate Jacobian, yields the best overall stability and consistency. Satisfactory viscous solutions were obtained for an ellipsoidal body of axis ratio 3:1 at Mach 35 and an angle of attack of 20 degrees.

  7. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the motion vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987

  8. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the motion vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  9. Strategies for the coupling of global and local crystal growth models

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Lun, Lisa; Yeckel, Andrew

    2007-05-01

    The modular coupling of existing numerical codes to model crystal growth processes will provide for maximum effectiveness, capability, and flexibility. However, significant challenges are posed to make these coupled models mathematically self-consistent and algorithmically robust. This paper presents sample results from a coupling of the CrysVUn code, used here to compute furnace-scale heat transfer, and Cats2D, used to calculate melt fluid dynamics and phase-change phenomena, to form a global model for a Bridgman crystal growth system. However, the strategy used to implement the CrysVUn-Cats2D coupling is unreliable and inefficient. The implementation of under-relaxation within a block Gauss-Seidel iteration is shown to be ineffective for improving the coupling performance in a model one-dimensional problem representative of a melt crystal growth model. Ideas to overcome current convergence limitations using approximations to a full Newton iteration method are discussed.

  10. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not sufficiently characterize the noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. High- and low-dose scans (i.e., 10% of the high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
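
    For reference, the first three sample L-moments can be computed from probability-weighted moments of the sorted sample (the standard Hosking-style estimators); the sketch below assumes a noise patch is passed in as an array:

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) via probability-weighted moments.

    l1 is the L-location (mean), l2 the L-scale, and l3/l2 the L-skewness;
    unlike conventional moments they remain well-behaved for heavy-tailed,
    non-Gaussian noise. Requires at least 3 samples.
    """
    x = np.sort(np.asarray(x, dtype=float).ravel())
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

# Toy usage on a simulated noise patch:
rng = np.random.default_rng(0)
l1, l2, l3 = sample_l_moments(rng.standard_normal((32, 32)))
```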

  11. WIND: Computer program for calculation of three dimensional potential compressible flow about wind turbine rotor blades

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    A computer program is presented which numerically solves an exact, full potential equation (FPE) for three dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three dimensional, boundary conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and a successive line overrelaxation combined with the sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.

  12. Marching iterative methods for the parabolized and thin layer Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Israeli, M.

    1985-01-01

    Downstream marching iterative schemes for the solution of the Parabolized or Thin Layer (PNS or TL) Navier-Stokes equations are described. Modifications of the primitive equation global relaxation sweep procedure result in efficient second-order marching schemes. These schemes take full account of the reduced order of the approximate equations as they behave like the SLOR for a single elliptic equation. The improved smoothing properties permit the introduction of Multi-Grid acceleration. The proposed algorithm is essentially Reynolds number independent and therefore can be applied to the solution of the subsonic Euler equations. The convergence rates are similar to those obtained by the Multi-Grid solution of a single elliptic equation; the storage is also comparable as only the pressure has to be stored on all levels. Extensions to three-dimensional and compressible subsonic flows are discussed. Numerical results are presented.
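
    For context, a minimal sketch of SLOR applied to a single elliptic (Laplace) equation, the iteration whose behavior the marching schemes above are said to match; the grid setup, boundary handling, and relaxation factor are illustrative assumptions:

```python
import numpy as np

def slor_laplace(u, omega=1.5, sweeps=200):
    """Successive line over-relaxation for the 2-D Laplace equation (Dirichlet data
    in the boundary rows/columns of u). One implicit tridiagonal solve per column."""
    ny, nx = u.shape
    n = ny - 2
    for _ in range(sweeps):
        for i in range(1, nx - 1):                      # march across the columns
            # line system: u[j-1] - 4 u[j] + u[j+1] = -(left + right neighbours)
            b = np.full(n, -4.0)
            d = -(u[1:-1, i - 1] + u[1:-1, i + 1])
            d[0] -= u[0, i]                             # fold in boundary values
            d[-1] -= u[-1, i]
            for j in range(1, n):                       # Thomas forward elimination
                w = 1.0 / b[j - 1]
                b[j] -= w
                d[j] -= w * d[j - 1]
            sol = np.empty(n)
            sol[-1] = d[-1] / b[-1]
            for j in range(n - 2, -1, -1):              # back substitution
                sol[j] = (d[j] - sol[j + 1]) / b[j]
            u[1:-1, i] = (1.0 - omega) * u[1:-1, i] + omega * sol  # over-relax the line
    return u
```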

  13. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  14. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume substantial memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can specify how much data to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated using shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient (e.g., 0.95) every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. The new scheme is tested with 1D examples first, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
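
    A minimal sketch of the storage idea described above: an exponential covariance truncated beyond a few correlation scales, held in CSC format, with the correlation scale shrunk by a coefficient each iteration. The function names, cutoff, and rebuild-per-iteration loop are illustrative assumptions, and the diagonal update from the estimator is only indicated in a comment:

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_exponential_covariance(coords, sigma2, corr_scale, cutoff=3.0):
    """Exponential covariance in CSC format, truncated beyond cutoff * corr_scale."""
    rows, cols, vals = [], [], []
    n = len(coords)
    for i in range(n):
        for j in range(n):
            h = abs(coords[i] - coords[j])
            if h <= cutoff * corr_scale:          # drop weakly informative entries
                rows.append(i)
                cols.append(j)
                vals.append(sigma2 * np.exp(-h / corr_scale))
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

# Per-iteration update loop: in the scheme above the diagonal would be replaced
# by the estimator's updated variances; here only the scale shortening is shown.
coords = np.linspace(0.0, 100.0, 51)    # 1D example grid
scale = 10.0
for it in range(5):
    C = sparse_exponential_covariance(coords, sigma2=1.0, corr_scale=scale)
    scale *= 0.95                        # shorten correlation scale each iteration
```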

  15. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    USGS Publications Warehouse

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.

  16. Using AORSA to simulate helicon waves in DIIID and ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lau, Cornwall H; Jaeger, E. F.; Berry, Lee Alan

    2014-01-01

    Recent efforts by Vdovin [1] and Prater [2] have shown that helicon waves (fast waves at roughly the 30th ion cyclotron harmonic) may be an attractive option for driving efficient off-axis current during non-inductive tokamak operation in DIIID, ITER, and DEMO. For DIIID scenarios, the ray tracing code GENRAY has been used extensively to study helicon current drive efficiency and location as a function of many plasma parameters. GENRAY has some limitations on absorption at high cyclotron harmonics, so the full-wave code AORSA, which is applicable to arbitrary Larmor radius and can therefore resolve high ion cyclotron harmonics, has recently been used to validate the GENRAY model. It will be shown that the GENRAY and AORSA current drive profiles are comparable for the envisioned high-temperature, high-density advanced scenarios for DIIID, where there is high single-pass absorption due to electron Landau damping. AORSA results will be shown for various plasma parameters for DIIID and for ITER. Computational difficulties in achieving these AORSA results will also be discussed. * Work supported by USDOE Contract No. DE-AC05-00OR22725 [1] V. L. Vdovin, Plasma Physics Reports, V.39, No.2, 2013 [2] R. Prater et al, Nucl. Fusion, 52, 083024, 2014

  17. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao

    2016-10-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. First, based on POME (the principle of maximum entropy), target candidates are iteratively segmented. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine target that satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust, and efficient.

  18. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  19. Extension of the MIRS computer package for the modeling of molecular spectra: From effective to full ab initio ro-vibrational Hamiltonians in irreducible tensor form

    NASA Astrophysics Data System (ADS)

    Nikitin, A. V.; Rey, M.; Champion, J. P.; Tyuterev, Vl. G.

    2012-07-01

    The MIRS software for the modeling of ro-vibrational spectra of polyatomic molecules was considerably extended and improved. The original version [Nikitin AV, Champion JP, Tyuterev VlG. The MIRS computer package for modeling the rovibrational spectra of polyatomic molecules. J Quant Spectrosc Radiat Transf 2003;82:239-49] was especially designed for separate or simultaneous treatments of complex band systems of polyatomic molecules. It was set up in the frame of effective polyad models, using algorithms based on advanced group theory algebra to take full account of symmetry properties. It has been successfully used for predictions and data fitting (positions and intensities) of numerous spectra of symmetric and spherical top molecules within the vibration extrapolation scheme. The new version offers more advanced possibilities for spectra calculations and modeling by removing several previous limitations, particularly on the size of polyads and the number of tensors involved. It allows dealing with overlapping polyads and includes more efficient and faster algorithms for the calculation of coefficients related to molecular symmetry properties (6C, 9C and 12C symbols for the C3v, Td, and Oh point groups), as well as for better convergence of least-squares-fit iterations. The new version is not limited to polyad effective models. It also allows direct predictions using full ab initio ro-vibrational normal mode Hamiltonians converted into the irreducible tensor form. Illustrative examples on CH3D, CH4, CH3Cl, CH3F and PH3 are reported, reflecting the present status of available data. It is written in C++ for standard PC computers operating under Windows. The full package including on-line documentation and recent data is freely available at http://www.iao.ru/mirs/mirs.htm or http://xeon.univ-reims.fr/Mirs/ or http://icb.u-bourgogne.fr/OMR/SMA/SHTDS/MIRS.html and as supplementary data from the online version of the article.

  20. Testing Update on 20 and 25-Ah Lithium Ion Cells

    NASA Technical Reports Server (NTRS)

    Bruce, Gregg C.; Mardikian, Pamella; Edwards, Sherri; Bugga, Kumar; Chin, Keith; Smart, Marshall; Surampudi, Subbarao

    2003-01-01

    Eagle-Picher Energy Products has worked on lithium-ion batteries for approximately 8 years. During that period, EPEPC developed and delivered several cell sizes under a program funded by the USAF and Canadian DND. The designs are wound cylindrical cells ranging from 7 to 40 Ah. Most cells delivered were approximately 25 Ah, owing to the requirements of Mars missions. Several iterations of cells were manufactured and delivered for evaluation. The first design was a 20-Ah cell (Design I), and the second a 25-Ah cell (Design II).

  1. Ordering of Glass Rods in Nematic and Cholesteric Liquid Crystals

    DTIC Science & Technology

    2011-12-01

    [Abstract not indexed; the record excerpt contains only reference-list fragments (e.g. M. D. Lynch and D. L. Patrick on controlling the orientation of micron-sized rod-shaped SiC particles with nematic liquid crystals; "Elastic torque and the levitation of metal wires by a nematic liquid crystal," Science 303(5658), 652-655 (2004)) and the opening line of the introduction: "Incorporating rod-like particles into liquid crystal (LC) media can lead…"]

  2. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  3. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite-horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. It is proven that, depending on the initial function, the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic, and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and to compute the iterative control law, facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
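
    A tabular analogue of the value iteration described above (the paper uses neural network approximators; the tabular form below is a simplified stand-in, and all names are assumptions):

```python
import numpy as np

def value_iteration(P, g, gamma=1.0, v0=None, tol=1e-8, max_iter=10000):
    """Tabular value iteration for a finite MDP (minimal stand-in for the ADP setting).

    Update: V_{k+1}(s) = min_a [ g(s, a) + gamma * sum_s' P(s, a, s') V_k(s') ].
    P has shape (S, A, S); g has shape (S, A). Any nonnegative v0 may be used
    to initialize the iteration, mirroring the arbitrary positive semi-definite
    initial function permitted in the paper (gamma=1 gives the undiscounted case).
    """
    S, A, _ = P.shape
    v = np.zeros(S) if v0 is None else np.asarray(v0, dtype=float).copy()
    for k in range(max_iter):
        q = g + gamma * (P @ v)          # (S, A) action-value estimates
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=1), k + 1   # value, greedy policy, iterations
        v = v_new
    return v, q.argmin(axis=1), max_iter
```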

  4. Comparison of minitrampoline- and full-sized trampoline-related injuries in the United States, 1990-2002.

    PubMed

    Shields, Brenda J; Fernandez, Soledad A; Smith, Gary A

    2005-07-01

    To compare mini- and full-sized trampoline-related injuries in the United States. A retrospective analysis of data was conducted for all ages from the National Electronic Injury Surveillance System (NEISS) of the US Consumer Product Safety Commission from 1990 to 2002. We compared 137 minitrampoline-related injuries with 143 full-sized trampoline-related injuries, randomly selected from all full-sized trampoline-related injuries reported to the NEISS during the study period. Patients ranged in age from 1 to 80 years (mean [SD]: 13.9 [17.7]) and 2 to 52 years (mean [SD]: 11.0 [8.0]) for mini- and full-sized trampoline-related injuries, respectively. Most patients were younger than 18 years (82% mini, 91% full-sized). Thirty-two percent of minitrampoline- and 19% of full-sized trampoline-related injuries were to children who were younger than 6 years; girls predominated (63% mini, 51% full-sized). Children who were younger than 6 years were more likely to be injured on a minitrampoline than on a full-sized trampoline, when compared with 6- to 17-year-olds (odds ratio [OR]: 2.43; 95% confidence interval [CI]: 1.33-4.47). The majority of injuries occurred at home (87% mini, 89% full-sized). All patients who were injured on a minitrampoline were treated and released, whereas 5% of patients who were injured on a full-sized trampoline were admitted to the hospital. On minitrampolines, children who were younger than 6 years were at risk for head lacerations (OR: 4.98; 95% CI: 1.71-16.03), and children who were 6 to 17 years were at risk for lower extremity strains or sprains (OR: 6.26; 95% CI: 1.35-59.14). Children who were 6 to 17 years and injured on a full-sized trampoline were at risk for lower extremity strains or sprains (OR: 4.85; 95% CI: 1.09-44.93). Lower extremity strains or sprains were the most common injury sustained by adults (18 years and older; 33% mini, 15% full-sized). Injury patterns were similar for mini- and full-sized trampolines, although minitrampoline-related injuries were less likely to require admission to the hospital and more commonly resulted in head lacerations among children who were younger than 6 years. Risk for injury could not be determined because of the lack of data regarding duration of exposure to risk. We therefore conclude that the use of full-sized trampolines by children should follow the policy recommendations of the American Academy of Pediatrics. Trampolines, including minitrampolines, should be regarded as training devices and not as toys. Until more data are available regarding exposure to risk, we caution against the use of the minitrampoline as a play device by children in the home, which is where most minitrampoline-related injuries occur.

  5. Surface damage and structure evolution of recrystallized tungsten exposed to ELM-like transient loads

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Du, J.; Wirtz, M.; Luo, G.-N.; Lu, G.-H.; Liu, W.

    2016-03-01

    Surface damage and structure evolution of the full tungsten ITER divertor under transient heat loads is a key concern for component lifetime and plasma operations. Recrystallization caused by transients and steady-state heat loads can lead to degradation of the material properties and is therefore one of the most serious issues for tungsten armor. In order to investigate the thermal response of the recrystallized tungsten under edge localized mode-like transient thermal loads, fully recrystallized tungsten samples with different average grain sizes are exposed to cyclic thermal shocks in the electron beam facility JUDITH 1. The results indicate that not only does the microstructure change due to recrystallization, but that the surface residual stress induced by mechanical polishing strongly influences the surface cracking behavior. The stress-free surface prepared by electro-polishing is shown to be more resistant to cracking than the mechanically polished one. The resulting surface roughness depends largely on the loading conditions instead of the recrystallized-grain size. As the base temperature increases from room temperature to 400 °C, surface roughening mainly due to the shear bands in each grain becomes more pronounced, and sub-grains (up to 3 μm) are simultaneously formed in the sub-surface. The directions of the shear bands exhibit strong grain-orientation dependence, and they are generally aligned with the traces of {1 1 2} twin habit planes. The results suggest that twinning deformation and dynamic recrystallization represent the predominant mechanism for surface roughening and related microstructure evolution.

  6. Activation Product Inverse Calculations with NDI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Mark Girard

    NDI based forward calculations of activation product concentrations can be systematically used to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and nosub production-depletion chain. The algorithm is suitable for automation.

  7. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (ICs) and other advanced engineering products and systems. Many IC structures constitute a very large-scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these structural specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important for accelerating the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution slow to converge. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structural specialty of on-chip circuits, such as Manhattan geometry and layered permittivity, is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability and deduct them directly from the system matrix resulting from a TDFEM-based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on the matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix-exponential-based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM into a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
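
    The "deduct the unstable (or large-eigenvalue) modes from the system matrix" idea of the second and third contributions can be sketched for a symmetric matrix as an eigenmode deflation; the threshold parameter and function name are illustrative assumptions:

```python
import numpy as np

def deflate_large_modes(A, threshold):
    """Remove eigenmodes of a symmetric system matrix above a stability threshold.

    A = sum_i w_i v_i v_i^T; subtracting the contribution of modes with
    w_i > threshold zeroes their eigenvalues, so an explicit update (or a
    matrix exponential) built from the deflated matrix tolerates a larger
    time step. The threshold would be derived from the chosen time step.
    """
    w, V = np.linalg.eigh(A)                 # symmetric matrix assumed
    mask = w > threshold                     # the "unstable" modes
    return A - (V[:, mask] * w[mask]) @ V[:, mask].T
```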

  8. Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers

    NASA Astrophysics Data System (ADS)

    Green, David; Berry, Lee; RF-SciDAC Collaboration

    2017-10-01

    The hot nature of fusion plasmas requires a wave-vector-dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method, such that the wave-vector-dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction for high-power RF antenna designs for fusion applications requires resolving the geometric details of the antenna and other plasma-facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers has been presented previously, in which an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
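
    A minimal sketch of the operator-split iteration outlined above: repeatedly solve the cold-plasma system while re-estimating the kinetic current correction on the right-hand side; the callable names and convergence test are assumptions:

```python
import numpy as np

def kinetic_rhs_iteration(A_cold, b, kinetic_current, tol=1e-8, max_iter=50):
    """Operator-split iteration: cold-plasma solve with a kinetic RHS correction.

    E_{k+1} = A_cold^{-1} (b + kinetic_current(E_k)): the kinetic contribution
    to the plasma current is re-estimated from the latest field and moved to
    the right-hand side, leaving the cold-plasma operator itself untouched.
    """
    E = np.linalg.solve(A_cold, b)           # cold-plasma starting point
    for k in range(max_iter):
        E_new = np.linalg.solve(A_cold, b + kinetic_current(E))
        if np.linalg.norm(E_new - E) <= tol * np.linalg.norm(E_new):
            return E_new, k + 1
        E = E_new
    raise RuntimeError("kinetic correction iteration did not converge")
```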

  9. A contrast source method for nonlinear acoustic wave fields in media with spatially inhomogeneous attenuation.

    PubMed

    Demi, L; van Dongen, K W A; Verweij, M D

    2011-03-01

    Experimental data reveals that attenuation is an important phenomenon in medical ultrasound. Attenuation is particularly important for medical applications based on nonlinear acoustics, since higher harmonics experience higher attenuation than the fundamental. Here, a method is presented to accurately solve the wave equation for nonlinear acoustic media with spatially inhomogeneous attenuation. Losses are modeled by a spatially dependent compliance relaxation function, which is included in the Westervelt equation. Introduction of absorption in the form of a causal relaxation function automatically results in the appearance of dispersion. The appearance of inhomogeneities implies the presence of a spatially inhomogeneous contrast source in the presented full-wave method leading to inclusion of forward and backward scattering. The contrast source problem is solved iteratively using a Neumann scheme, similar to the iterative nonlinear contrast source (INCS) method. The presented method is directionally independent and capable of dealing with weakly to moderately nonlinear, large scale, three-dimensional wave fields occurring in diagnostic ultrasound. Convergence of the method has been investigated and results for homogeneous, lossy, linear media show full agreement with the exact results. Moreover, the performance of the method is demonstrated through simulations involving steered and unsteered beams in nonlinear media with spatially homogeneous and inhomogeneous attenuation. © 2011 Acoustical Society of America
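
    The Neumann-scheme iteration described above can be sketched generically as follows; apply_G (convolution with the background Green's function) and contrast_fn (evaluation of the contrast source) are abstract placeholders, not the INCS implementation:

```python
import numpy as np

def neumann_contrast_source(u_inc, apply_G, contrast_fn, tol=1e-6, max_iter=100):
    """Neumann-scheme iteration for a contrast-source integral formulation.

    u_{k+1} = u_inc + G[ S(u_k) ], where apply_G convolves a source with the
    background Green's function and contrast_fn evaluates the (nonlinear,
    attenuative) contrast source S for the current field estimate.
    """
    u = u_inc.copy()
    for k in range(max_iter):
        u_new = u_inc + apply_G(contrast_fn(u))
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_inc):
            return u_new, k + 1
        u = u_new
    raise RuntimeError("Neumann iteration did not converge")
```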

  10. 77 FR 43811 - Submission for OMB Review; Comment Request-Safety Standards for Full-Size Baby Cribs and Non-Full...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-26

    ... CONSUMER PRODUCT SAFETY COMMISSION Submission for OMB Review; Comment Request--Safety Standards for Full-Size Baby Cribs and Non-Full-Size Baby Cribs; Compliance Form AGENCY: Consumer Product Safety Commission. ACTION: Notice. SUMMARY: The Consumer Product Safety Commission (CPSC or Commission) announces...

  11. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  12. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, W; Niu, T; Xing, L

    2015-06-15

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images. For DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). Results: The results show the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show an edge effect, while no significant edge effect is seen for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM. Conclusion: HYPR-NLM provides an effective way to reduce the generic magnified image noise of dual-energy material decomposition while preserving resolution. This work is supported in part by NIH grants 7R01HL111141 and 1R01-EB016777. This work is also supported by the Natural Science Foundation of China (NSFC Grant No. 81201091), Fundamental Research Funds for the Central Universities in China, and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.

  13. ITER Construction—Plant System Integration

    NASA Astrophysics Data System (ADS)

    Tada, E.; Matsuda, S.

    2009-02-01

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most ITER components are to be provided in-kind by the member countries, integrated project management should be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing, and commissioning of ITER.

  14. NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 4

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2016-01-01

    The NDARC code performs design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance analysis, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. The principal tasks (sizing, mission analysis, flight performance analysis) are shown in the figure as boxes with heavy borders. Heavy arrows show control of subordinate tasks. The aircraft description consists of all the information, input and derived, that defines the aircraft. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. This information can be the result of the sizing task; can come entirely from input, for a fixed model; or can come from the sizing task in a previous case or previous job. The aircraft description information is available to all tasks and all solutions. The sizing task determines the dimensions, power, and weight of a rotorcraft that can perform a specified set of design conditions and missions. The aircraft size is characterized by parameters such as design gross weight, weight empty, rotor radius, and engine power available. The relations between dimensions, power, and weight generally require an iterative solution. From the design flight conditions and missions, the task can determine the total engine power or the rotor radius (or both power and radius can be fixed), as well as the design gross weight, maximum takeoff weight, drive system torque limit, and fuel tank capacity. For each propulsion group, the engine power or the rotor radius can be sized. Missions are defined for the sizing task and for the mission performance analysis. A mission consists of a number of mission segments, for which time, distance, and fuel burn are evaluated. For the sizing task, certain missions are designated to be used for design gross weight calculations, for transmission sizing, and for fuel tank sizing. The mission parameters include mission takeoff gross weight and useful load. For a specified takeoff fuel weight with adjustable segments, the mission time or distance is adjusted so the fuel required for the mission equals the takeoff fuel weight. The mission iteration is on fuel weight or energy. Flight conditions are specified for the sizing task and for the flight performance analysis. For the sizing task, certain flight conditions are designated to be used for design gross weight calculations, for transmission sizing, for maximum takeoff weight calculations, and for anti-torque or auxiliary thrust rotor sizing. The flight condition parameters include gross weight and useful load. For flight conditions and mission takeoff, the gross weight can be maximized, such that the power required equals the power available. A flight state is defined for each mission segment and each flight condition. The aircraft performance can be analyzed for the specified state, or a maximum-effort performance can be identified. The maximum effort is specified in terms of a quantity such as best endurance or best range, and a variable such as speed, rate of climb, or altitude.
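
    The mission iteration on fuel weight mentioned above amounts to a fixed point: the fuel carried must equal the fuel burned at the resulting takeoff gross weight. A toy sketch, with all names and the linear burn model purely hypothetical (this is not NDARC code):

```python
def size_mission_fuel(weight_empty, useful_load, fuel_burn_fn,
                      tol=1e-6, max_iter=100):
    """Fixed-point mission-fuel iteration (hypothetical sketch, not NDARC code).

    fuel_burn_fn(takeoff_gross_weight) 'flies' the mission and returns the
    fuel burned; iterate until the fuel carried equals the fuel required.
    """
    fuel = 0.0
    for _ in range(max_iter):
        togw = weight_empty + useful_load + fuel      # takeoff gross weight
        fuel_required = fuel_burn_fn(togw)
        if abs(fuel_required - fuel) < tol * (1.0 + fuel):
            return fuel_required, togw
        fuel = fuel_required
    raise RuntimeError("mission fuel iteration did not converge")

# Toy usage with a hypothetical linear burn model (heavier aircraft burn more):
fuel, togw = size_mission_fuel(3000.0, 1200.0, lambda w: 200.0 + 0.05 * w)
```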

  15. Formative evaluation of a mobile liquid portion size estimation interface for people with varying literacy skills.

    PubMed

    Chaudry, Beenish Moalla; Connelly, Kay; Siek, Katie A; Welch, Janet L

    2013-12-01

    Chronically ill people, especially those with low literacy skills, often have difficulty estimating portion sizes of liquids to help them stay within their recommended fluid limits. There is a plethora of mobile applications that can help people monitor their nutritional intake, but unfortunately these applications require the user to have high literacy and numeracy skills for portion size recording. In this paper, we present two studies in which the low- and high-fidelity versions of a portion size estimation interface, designed using the cognitive strategies adults employ for portion size estimation during diet recall studies, were evaluated by a chronically ill population with varying literacy skills. The low-fidelity interface was evaluated by ten patients, all of whom were able to accurately estimate portion sizes of various liquids with the interface. Eighteen participants did an in situ evaluation of the high-fidelity version, incorporated in a diet and fluid monitoring mobile application, for 6 weeks. Although the accuracy of the estimation could not be confirmed in the second study, the participants who actively interacted with the interface showed better health outcomes by the end of the study. Based on these findings, we provide recommendations for designing the next iteration of an accurate and low-literacy-accessible liquid portion size estimation mobile interface.

  17. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  18. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  19. Optimization design combined with coupled structural-electrostatic analysis for the electrostatically controlled deployable membrane reflector

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Yang, Guigeng; Zhang, Yiqun

    2015-01-01

    The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs that accounts for the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electrical field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying a uniform prestress on the membrane design shape and optimizing the voltages, where the optimal voltages are computed by a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared, and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.
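
    The Newton-Raphson solution of the coupled residual can be sketched generically as below; the monolithic residual, the finite-difference Jacobian, and all names are illustrative assumptions rather than the paper's three-field implementation:

```python
import numpy as np

def newton_coupled(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton-Raphson on a monolithic coupled residual R(x) = 0.

    x would stack the structural displacements, electric potentials, and
    fictitious mesh displacements of a three-field formulation; here the
    Jacobian is simply built by finite differences, column by column.
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, k
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x -= np.linalg.solve(J, r)             # Newton update
    raise RuntimeError("Newton-Raphson did not converge")
```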

  20. Creation of an in vitro biomechanical model of the trachea using rapid prototyping.

    PubMed

    Walenga, Ross L; Longest, P Worth; Sundaresan, Gobalakrishnan

    2014-06-03

    Previous in vitro models of the airways are either rigid or, if flexible, have not matched in vivo compliance characteristics. Rapid prototyping provides a quickly evolving approach that can be used to directly produce in vitro airway models using either rigid or flexible polymers. The objective of this study was to use rapid prototyping to directly produce a flexible hollow model that matches the biomechanical compliance of the trachea. The airway model consisted of a previously developed characteristic mouth-throat region, the trachea, and a portion of the main bronchi. Compliance of the tracheal region was known from a previous in vivo imaging study that reported cross-sectional areas over a range of internal pressures. The compliance of the tracheal region was matched to the in vivo data for a specific flexible resin by iteratively selecting the thicknesses and other dimensions of the tracheal wall components. Seven iterative models were produced and exhibited highly non-linear expansion consisting of an initial rapid size increase, a transition region, and continued slower size increase as pressure was increased. Thickness of the esophageal interface membrane and initial tracheal indentation were identified as key parameters, with the final model correctly predicting all phases of expansion within 5% of the in vivo data. Applications of the current biomechanical model relate to endotracheal intubation and include determination of effective mucus suctioning and evaluation of cuff sealing with respect to gases and secretions. Copyright © 2014 Elsevier Ltd. All rights reserved.
