Sample records for iter reference source

  1. Investigation of Helicon discharges as RF coupling concept of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Briefi, S.; Fantz, U.

    2013-02-01

    The ITER reference source for H- and D- requires a high RF input power (up to 90 kW per driver). To reduce the demands on the RF circuit, it is highly desirable to reduce the power consumption while retaining the values of the relevant plasma parameters, namely the positive ion density and the atomic hydrogen density. Helicon plasmas are a promising alternative RF coupling concept, but they are typically generated in long, thin discharge tubes using rare gases and an RF frequency of 13.56 MHz. Hence their applicability to the ITER reference source geometry and frequency, and to the use of hydrogen/deuterium, has to be proven. In this paper the strategy for using Helicon discharges at ITER reference source parameters is introduced, and the first promising measurements, carried out on a small laboratory experiment, are presented. With increasing RF power a mode transition to the Helicon regime was observed for argon and argon/hydrogen mixtures. In pure hydrogen/deuterium the mode transition could not yet be achieved, as the available RF power is too low. In deuterium a special feature of Helicon discharges, the so-called low-field peak, could be observed at a moderate B-field of 3 mT.

  2. Evaluation of power transfer efficiency for a high power inductively coupled radio-frequency hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.

    2018-04-01

    Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Owing to its various advantages, a radio-frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices that operate at a working frequency of 1 MHz and are characterized by high RF power density (~9.4 W cm-3) and low operational pressure (around 0.3 Pa). The RF field is produced by a coil in a cylindrical chamber, generating a plasma that then expands inside the chamber. This paper recalls the concepts on which a methodology is developed to evaluate the efficiency of RF power transfer to a hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and of other operating source and plasma parameters. The study is applied to a high-power IC RF hydrogen ion source that is similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).

  3. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimensions 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison at BATMAN of the source performance of an ITER-relevant extraction system with chamfered 14 mm diameter apertures against that of the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density (consistent with ion trajectory calculations) or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  4. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option [1]. In addition, a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF-driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared with the filamented arc-driven ion source. The RF-driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half-ITER-size ion source are on-going at IPP, and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF) in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given, and the ongoing integration effort into the ITER plant will be highlighted. It will be shown how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues.
The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  5. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
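The iterative framework described in this abstract can be illustrated by a generic fixed-point sketch: subtract the scatter predicted from the current primary estimate, then repeat. The toy 10% blur scatter operator below is an assumption for illustration, not the paper's analytic physics model.

```python
import numpy as np

def iterative_scatter_correction(measured, scatter_model, n_iter=5):
    """Fixed-point scatter correction: at each iteration, subtract the
    scatter predicted from the current primary estimate (a hypothetical
    simplification of the physics-model-based framework)."""
    primary = measured.copy()
    for _ in range(n_iter):
        primary = measured - scatter_model(primary)
    return primary

# Toy forward model: scatter is a 10% local blur of the primary signal.
rng = np.random.default_rng(0)
true_primary = rng.uniform(1.0, 2.0, size=64)

def scatter_model(p):
    return 0.1 * np.convolve(p, np.ones(5) / 5.0, mode="same")

measured = true_primary + scatter_model(true_primary)
corrected = iterative_scatter_correction(measured, scatter_model)
```

Because the toy scatter operator is a contraction, the error shrinks geometrically with each iteration, mirroring the fast convergence reported in the abstract.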

  6. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option. In addition, a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF-driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared with the filamented arc-driven ion source. The RF-driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half-ITER-size ion source are on-going at IPP, and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF) in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given, and the ongoing integration effort into the ITER plant will be highlighted. It will be shown how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues.
The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  7. Conceptual design of data acquisition and control system for two Rf driver based negative ion source for fusion R&D

    NASA Astrophysics Data System (ADS)

    Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.

    2013-02-01

    Twin Source (TS), an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for developing their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed for developing and operating a control system based on ITER guidelines, as a similar configuration is to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC the required control loop time is in the range of 5-10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen, as suggested in the ITER slow controller catalog. For data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected, as suggested in the ITER fast controller catalog.
This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and the applicable plant system integration processes.

  8. RF Negative Ion Source Development at IPP Garching

    NASA Astrophysics Data System (ADS)

    Kraus, W.; McNeely, P.; Berger, M.; Christ-Koch, S.; Falter, H. D.; Fantz, U.; Franzen, P.; Fröschle, M.; Heinemann, B.; Leyer, S.; Riedl, R.; Speth, E.; Wünderlich, D.

    2007-08-01

    IPP Garching is heavily involved in the development of an ion source for neutral beam heating of the ITER tokamak. RF-driven ion sources have been successfully developed by the NB heating group at IPP Garching and are in operation on the ASDEX Upgrade tokamak for positive-ion-based NBH. Building on this experience, an RF-driven H- ion source has been under development at IPP Garching as an alternative to the ITER reference design ion source. The number of test beds devoted to source development for ITER has increased from one (BATMAN) to three with the addition of MANITU and RADI. This paper contains descriptions of the three test beds. Results on diagnostic development using laser photodetachment and cavity ringdown spectroscopy are given for BATMAN. The latest results of long-pulse development on MANITU are presented, including the longest pulse to date (600 s), together with details of the source modifications necessitated by pulses in excess of 100 s. The newest test bed, RADI, is still being commissioned, and only technical details of that test bed are included in this paper. The final topic of the paper is an investigation into the effects of biasing the plasma grid.

  9. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can extract unknown source signals from received signals alone. This is accomplished by finding the statistical independence of signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibration source signal identification in complex structures. In this study, a simple iterative extension of the conventional ICA is proposed to mitigate these problems. To extract more stable source signals in a valid order, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals by referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to real problems in complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
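The reordering step described above (matching each separated signal to a signal measured near a source by correlation magnitude) can be sketched as follows. The signal shapes and the sign-correction step are illustrative assumptions, not the authors' full algorithm.

```python
import numpy as np

def reorder_sources(separated, references):
    """Reorder (and sign-correct) separated signals by matching each one
    to the reference measured on or near a source, using the magnitude
    of the correlation coefficient."""
    n = separated.shape[0]
    # correlation of every separated signal with every reference signal
    corr = np.corrcoef(np.vstack([separated, references]))[:n, n:]
    order = np.argmax(np.abs(corr), axis=0)   # best-matching output per reference
    reordered = separated[order]
    signs = np.sign(corr[order, np.arange(n)])  # fix arbitrary ICA sign flips
    return reordered * signs[:, None]

t = np.linspace(0.0, 1.0, 500)
s1 = np.sin(2 * np.pi * 5 * t)                 # tonal source
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # square-wave source
refs = np.vstack([s1, s2])
# Pretend ICA returned the sources swapped and one sign-flipped:
separated = np.vstack([-s2, s1])
fixed = reorder_sources(separated, refs)
```

After the call, `fixed[0]` matches the first reference and `fixed[1]` the second, with consistent signs, which is the "valid ordering" the abstract refers to.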

  10. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
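The band-energy normalization idea above (decompose the image into bands, scale each band's localized energy to a reference value, reconstruct) can be sketched in 1-D with numpy. The box-filter band decomposition and the std-based energy measure are simplifying assumptions, not the paper's exact method.

```python
import numpy as np

def box_smooth(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def normalize_energy_bands(x, widths=(4, 16), ref_energy=1.0):
    """Decompose a signal into bands (differences of progressively
    box-smoothed copies), scale each band's energy (std) to a reference
    value, and reconstruct. Widths and energy measure are assumptions."""
    bands, residual = [], x.astype(float)
    for w in widths:
        smooth = box_smooth(residual, w)
        bands.append(residual - smooth)   # detail band at this scale
        residual = smooth                 # pass the coarser part onward
    out = residual                        # coarsest band left unscaled
    for band in bands:
        e = band.std()
        out = out + (band * (ref_energy / e) if e > 0 else band)
    return out

rng = np.random.default_rng(1)
strong = rng.normal(0.0, 5.0, 256)   # "high-energy" signal from one source
weak = rng.normal(0.0, 0.5, 256)     # "low-energy" signal from another source
n_strong = normalize_energy_bands(strong)
n_weak = normalize_energy_bands(weak)
```

After normalization the two signals have comparable band energies, which is what makes downstream supervised systems less sensitive to the acquisition source.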

  11. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize the sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
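The neighbor-driven reweighting idea can be sketched as a FOCUSS-style iteration in which each point's weight is the mean of the previous solution's magnitudes over the point and its neighbors. This 1-D toy with a random lead-field stand-in is an assumption for illustration, not the authors' CMOSS implementation.

```python
import numpy as np

def neighbor_weighted_focuss(A, b, neighbors, n_iter=30, lam=1e-6):
    """FOCUSS-style iterative reweighting where each point's weight is
    driven by the previous solution at the point AND its neighbors."""
    n = A.shape[1]
    x = np.ones(n)
    for _ in range(n_iter):
        # weight = mean |x| over the point and its neighbors
        w = np.array([np.mean(np.abs(x[[i] + neighbors[i]])) for i in range(n)])
        G = A @ np.diag(w) @ A.T + lam * np.eye(A.shape[0])
        x = np.diag(w) @ A.T @ np.linalg.solve(G, b)  # weighted min-norm solve
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(15, 30))            # stand-in "lead field"
x_true = np.zeros(30)
x_true[7] = 2.0                          # a single focal source
b = A @ x_true
neighbors = {i: [max(i - 1, 0), min(i + 1, 29)] for i in range(30)}
x_hat = neighbor_weighted_focuss(A, b, neighbors)
```

The reweighting concentrates the solution onto the true focal source while the neighbor averaging keeps weight available at adjacent points, the mechanism the abstract credits with rectifying local location bias.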

  12. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the reconstruction frequency range and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without a reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves on CP in two respects: (1) the number of acoustic sources is no longer needed, the only assumption made being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical acoustical measurement scenarios, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
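For reference, a standard FISTA iteration for the l1-regularized least-squares problem (Beck and Teboulle) looks as follows. The propagation-based spatial basis of the paper is not reproduced; `A` here is a generic stand-in for the propagation matrix.

```python
import numpy as np

def fista(A, b, lam=0.05, n_iter=300):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = y - (A.T @ (A @ y - b)) / L           # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2  # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 50]] = [1.0, -1.5]                     # two sparse "sources"
b = A @ x_true
x_hat = fista(A, b)
```

The soft-thresholding step is what enforces the "weakly sparse" structure without requiring the number of sources in advance.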

  13. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. 
Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.

  14. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
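The greedy pursuit family the paper builds on can be illustrated with textbook subspace pursuit (Dai and Milenkovic) for k-sparse recovery. The hierarchical, MEG-specific machinery of SPIGH is not reproduced here; the dimensions and sparsity are illustrative.

```python
import numpy as np

def subspace_pursuit(A, b, k, n_iter=10):
    """Textbook subspace pursuit: expand the support with the k largest
    residual correlations, solve least squares, prune back to k."""
    m, n = A.shape
    support = np.argsort(np.abs(A.T @ b))[-k:]          # initial support guess
    for _ in range(n_iter):
        x_s = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
        r = b - A[:, support] @ x_s                     # residual
        if np.linalg.norm(r) < 1e-12:
            break
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-k:])
        x_c = np.linalg.lstsq(A[:, cand], b, rcond=None)[0]
        support = cand[np.argsort(np.abs(x_c))[-k:]]    # prune to k largest
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 80))            # stand-in measurement matrix
x_true = np.zeros(80)
x_true[[3, 20, 65]] = [1.5, -2.0, 1.0]   # 3-sparse source vector
b = A @ x_true
x_hat = subspace_pursuit(A, b, k=3)
```

The per-iteration cost is a small least-squares solve, which is the source of the low computational complexity the abstract emphasizes.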

  15. Generalized reference fields and source interpolation for the difference formulation of radiation transport

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham

    2010-03-01

    In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field: the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but it introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step or, in cases where an alternative temperature better describes the radiation field, relative to that temperature. The result is a method in which the iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
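The derivative source terms mentioned above can be made explicit in a one-dimensional grey-transport sketch (notation assumed here: intensity $I$, reference black-body field $B$, opacity $\sigma$, direction cosine $\mu$). Substituting the difference field $D = I - B$ into the transport equation combines the nearly cancelling emission and absorption terms into a single $-\sigma D$, at the cost of the time and space derivatives of $B$:

```latex
\frac{1}{c}\frac{\partial I}{\partial t} + \mu\,\frac{\partial I}{\partial z}
   = \sigma\,(B - I)
\quad\Longrightarrow\quad
\frac{1}{c}\frac{\partial D}{\partial t} + \mu\,\frac{\partial D}{\partial z}
   = -\sigma D
     \;-\; \frac{1}{c}\frac{\partial B}{\partial t}
     \;-\; \mu\,\frac{\partial B}{\partial z},
\qquad D = I - B .
```

The generalized scheme of the paper replaces $B$ evaluated at the end-of-step temperature with $B$ at the beginning-of-step (or another) temperature, so the derivative sources are known at the start of the step.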

  16. Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.

    PubMed

    Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris

    2010-07-15

    The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
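The correct-and-iterate loop can be sketched with a toy pileup consensus. Read placements are taken as given (as a short-read mapper would provide), and the single-error example is illustrative; it is not the real iCORN pipeline, which re-maps reads against the corrected sequence each iteration.

```python
from collections import Counter

def correct_reference(ref, alignments, n_iter=5):
    """Iteratively correct a reference string: build a per-position
    pileup from aligned reads, replace bases where the read consensus
    disagrees, and repeat until the sequence is stable."""
    for _ in range(n_iter):
        pile = [Counter() for _ in ref]
        for pos, read in alignments:
            for i, base in enumerate(read):
                if 0 <= pos + i < len(ref):
                    pile[pos + i][base] += 1
        new = "".join(c.most_common(1)[0][0] if c else r
                      for c, r in zip(pile, ref))
        if new == ref:          # converged: no further corrections found
            break
        ref = new
    return ref

genome = "ACGTTGCAGGTACCGA"                # the (unknown) true sequence
draft = genome[:6] + "A" + genome[7:]      # draft reference with one error
# error-free reads with known alignment positions, hypothetical mapper output
reads = [(i, genome[i:i + 6]) for i in range(0, 11, 2)]
corrected = correct_reference(draft, reads)
```

The deep-coverage pileup outvotes the single reference error, which is the mechanism by which iCORN corrects errors using aligned short reads.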

  17. Use of sediment source fingerprinting to assess the role of subsurface erosion in the supply of fine sediment in a degraded catchment in the Eastern Cape, South Africa.

    PubMed

    Manjoro, Munyaradzi; Rowntree, Kate; Kakembo, Vincent; Foster, Ian; Collins, Adrian L

    2017-06-01

    Sediment source fingerprinting has been successfully deployed to provide information on the surface and subsurface sources of sediment in many catchments around the world. However, there is still scope to re-examine some of the major assumptions of the technique with reference to the number of fingerprint properties used in the model, the number of model iterations, and the potential uncertainties of using more than one sediment core collected from the same floodplain sink. We investigated the role of subsurface erosion in the supply of fine sediment to two sediment cores collected from a floodplain in a small degraded catchment in the Eastern Cape, South Africa. The results showed that increasing the number of individual fingerprint properties in the composite signature did not improve the model goodness-of-fit; this is still a much-debated issue in sediment source fingerprinting. To test the goodness-of-fit further, the number of model repeat iterations was increased from 5000 to 30,000. However, this neither reduced the uncertainty ranges in the modelled source proportions nor improved the model goodness-of-fit. The estimated sediment source contributions were not consistent with the available published data on erosion processes in the study catchment. The temporal pattern of sediment source contributions predicted for the two sediment cores was very different despite the cores being collected in close proximity on the same floodplain. This highlights some of the potential limitations associated with using floodplain cores to reconstruct catchment erosion processes and associated sediment source contributions. For the source tracing approach in general, the findings suggest the need for further investigation into uncertainties related to the number of fingerprint properties included in un-mixing models. The findings support the current widespread use of ≤5000 model repeat iterations for estimating the key sources of sediment samples.
Copyright © 2016 Elsevier Ltd. All rights reserved.
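The Monte Carlo un-mixing step at the heart of this kind of source fingerprinting (random proportions on the simplex, scored by a relative mixing residual over a fixed number of repeat iterations) can be sketched as follows. This is a minimal illustration with hypothetical tracer values, not the authors' data or their exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fingerprint values: 3 sources x 3 tracer properties
# (illustrative numbers only, not the paper's data).
sources = np.array([[12.0, 0.8, 5.0],
                    [30.0, 2.1, 9.0],
                    [55.0, 4.0, 2.0]])
true_p = np.array([0.5, 0.3, 0.2])   # known proportions for the synthetic test
mixture = true_p @ sources           # synthetic "measured" sediment sample

def unmix(sources, mixture, n_iter=5000):
    """Monte Carlo un-mixing: sample source proportions on the simplex and
    keep the set minimising the summed relative residual (goodness-of-fit)."""
    best_p, best_err = None, np.inf
    for _ in range(n_iter):
        p = rng.dirichlet(np.ones(len(sources)))   # proportions sum to 1
        err = np.sum(np.abs(mixture - p @ sources) / mixture)
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

p_hat, err = unmix(sources, mixture)
```

With 5000 repeat iterations the sampled simplex is dense enough that adding more iterations barely moves the best-fit proportions, which is consistent with the paper's finding that increasing iterations from 5000 to 30,000 did not improve the goodness-of-fit.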

  18. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
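The O(n³) step the paper's incremental and recursive strategies attack is the solve of the ordinary Kriging system. A minimal sketch of that baseline solve, assuming an exponential variogram model and illustrative sample data:

```python
import numpy as np

def variogram(h, sill=1.0, rng_=10.0):
    """Exponential variogram model (an assumption of this sketch)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_kriging(xy, z, q):
    """Ordinary Kriging estimate at query point q: build and solve the
    (n+1)x(n+1) linear system, which is the O(n^3) step that the paper's
    incremental/recursive strategies avoid repeating from scratch."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - q, axis=1))
    w = np.linalg.solve(A, b)          # weights + Lagrange multiplier
    return w[:n] @ z                   # kriged estimate

# four sensor locations on a unit square with observed values
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
est = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

When most source points keep the same locations between stream iterations, most of `A` is unchanged, which is exactly the structure the incremental and recursive updates exploit.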

  19. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  20. Design of a cavity ring-down spectroscopy diagnostic for negative ion rf source SPIDER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualotto, R.; Alfier, A.; Lotto, L.

    2010-10-15

The rf source test facility SPIDER will test and optimize the source of the 1 MV neutral beam injection systems for ITER. Cavity ring-down spectroscopy (CRDS) will measure the absolute line-of-sight integrated density of negative (H- and D-) ions produced in the extraction region of the source. CRDS takes advantage of the photodetachment process: negative ions are converted to neutral hydrogen atoms by electron stripping through absorption of a photon from a laser. The design of this diagnostic is presented with the corresponding simulation of the expected performance. A prototype operated without plasma has provided CRDS reference signals, design validation, and results concerning the signal-to-noise ratio.
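The CRDS principle maps a change in ring-down time to a line-integrated ion density: the extra per-pass photodetachment absorption adds (c/L)·σ·(n·d) to the cavity decay rate 1/τ. A minimal numeric sketch, with all values illustrative rather than SPIDER design parameters:

```python
# CRDS sketch: line-integrated negative ion density from the change in
# ring-down time between plasma-on and plasma-off measurements.
c = 2.99792458e8     # speed of light [m/s]
L_cav = 2.0          # cavity length [m] (assumed)
sigma = 3.5e-21      # H- photodetachment cross-section [m^2] (approximate, near 1064 nm)
tau0 = 30e-6         # empty-cavity ring-down time [s] (illustrative)
tau = 29e-6          # ring-down time with negative ions present [s] (illustrative)

# decay-rate difference gives the line-integrated density n*d
nd = (L_cav / (c * sigma)) * (1.0 / tau - 1.0 / tau0)   # [m^-2]
```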

  1. Source apportionment for fine particulate matter in a Chinese city using an improved gas-constrained method and comparison with multiple receptor models.

    PubMed

    Shi, Guoliang; Liu, Jiayuan; Wang, Haiting; Tian, Yingze; Wen, Jie; Shi, Xurong; Feng, Yinchang; Ivey, Cesunica E; Russell, Armistead G

    2018-02-01

PM2.5 is one of the most studied atmospheric pollutants due to its adverse impacts on human health, welfare, and the environment. An improved model (the chemical mass balance gas constraint-Iteration: CMBGC-Iteration) is proposed and applied to identify source categories and estimate source contributions of PM2.5. The CMBGC-Iteration model uses the ratios of gases to PM as constraints and considers the uncertainties of source profiles and receptor datasets, which is crucial information for source apportionment. To apply this model, samples of PM2.5 were collected at Tianjin, a megacity in northern China. The ambient PM2.5 dataset, source information, and gas-to-particle ratios (such as SO2/PM2.5, CO/PM2.5, and NOx/PM2.5 ratios) were introduced into the CMBGC-Iteration to identify the potential sources and their contributions. Six source categories were identified by this model; ordered by their contributions to PM2.5, they were: secondary sources (30%), crustal dust (25%), vehicle exhaust (16%), coal combustion (13%), SOC (7.6%), and cement dust (0.40%). In addition, the same dataset was also processed with other receptor models (CMB, CMB-Iteration, CMB-GC, PMF, WALSPMF, and NCAPCA), and the results obtained were compared. Ensemble-average source impacts were calculated based on the seven source apportionment results: secondary sources (28%), crustal dust (20%), coal combustion (18%), vehicle exhaust (17%), SOC (11%), and cement dust (1.3%). The similar results of the CMBGC-Iteration and ensemble methods indicate that CMBGC-Iteration can produce relatively appropriate results.
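The core chemical mass balance step underlying all of these receptor models is a least-squares solve of ambient species concentrations against source profiles. A minimal sketch with hypothetical profile numbers (the CMBGC-Iteration additionally weights by profile/receptor uncertainties and constrains the fit with gas-to-PM ratios, both omitted here):

```python
import numpy as np

# Hypothetical source profiles: mass fraction of each chemical species per
# unit mass of source (rows: sources, columns: species); illustrative only.
profiles = np.array([
    [0.20, 0.05, 0.01],   # "crustal dust"
    [0.02, 0.30, 0.10],   # "vehicle exhaust"
    [0.05, 0.10, 0.40],   # "coal combustion"
])
true_s = np.array([10.0, 4.0, 6.0])   # source contributions [ug/m^3]
ambient = true_s @ profiles           # synthetic ambient concentrations

# Core CMB step: least-squares solve of ambient = contributions @ profiles.
contrib, *_ = np.linalg.lstsq(profiles.T, ambient, rcond=None)
```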

  2. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
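The underlying principle (detector count rates falling off with distance from the source) can be sketched in one dimension. This toy model assumes clean inverse-square counting between two opposed detectors; the paper's actual method uses four fixed detectors plus an iteratively repositioned calibrating source:

```python
import math

def locate_1d(c1, c2, L):
    """Toy inverse-square localization between two detectors a distance L
    apart: counts c1 ~ 1/x^2 and c2 ~ 1/(L-x)^2 yield the source position x
    (distance from detector 1) in closed form."""
    r = math.sqrt(c2 / c1)       # r = x / (L - x)
    return L * r / (1.0 + r)

# synthetic counts for a source 12 cm from detector 1, detectors 46 cm apart
L, x_true = 46.0, 12.0
c1 = 1.0e4 / x_true**2
c2 = 1.0e4 / (L - x_true)**2
x_est = locate_1d(c1, c2, L)
```

In the real problem, scattering and attenuation in the cargo break the clean inverse-square law, which is why the iterative calibrating-source comparison improves accuracy over a direct fit.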

  3. Characterization of the ITER model negative ion source during long pulse operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemsworth, R.S.; Boilson, D.; Crowley, B.

    2006-03-15

    It is foreseen to operate the neutral beam system of the International Thermonuclear Experimental Reactor (ITER) for pulse lengths extending up to 1 h. The performance of the KAMABOKO III negative ion source, which is a model of the source designed for ITER, is being studied on the MANTIS test bed at Cadarache. This article reports the latest results from the characterization of the ion source, in particular electron energy distribution measurements and the comparison between positive ion and negative ion extraction from the source.

  4. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

Background: Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose: To evaluate a new iterative reconstruction (IR) algorithm for dose reduction in in vitro coronary artery calcium scoring at different tube currents. Material and Methods: An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. The reference tube current was 80 mAs and was reduced stepwise to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results: In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion: New iterative reconstruction kernels may allow for a reduction in tube current for established Agatston scoring protocols and consequently for a substantial reduction in radiation exposure. PMID:28607763
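Both scores hinge on the standard Agatston method, which sums lesion area times a density weight determined by the peak attenuation. A minimal sketch using the published 130/200/300/400 HU thresholds; the lesion list is hypothetical input, not study data:

```python
def agatston_weight(peak_hu):
    """Density weighting factor of the standard Agatston method."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(lesions):
    """Sum of area [mm^2] x density weight over all lesions >= 130 HU.
    `lesions` is a list of (area_mm2, peak_hu) tuples (illustrative input)."""
    return sum(area * agatston_weight(hu) for area, hu in lesions if hu >= 130)

# e.g. three candidate lesions; the 90 HU one falls below threshold
score = agatston_score([(4.0, 150), (2.5, 320), (1.0, 90)])
```

The 130 HU threshold is what makes the score noise-sensitive at low tube currents: added image noise pushes voxels above threshold and inflates the score, which is the effect IR mitigates.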

  5. Status of the 1 MeV Accelerator Design for ITER NBI

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.

    2011-09-01

The beam source of the neutral beam heating/current drive system for ITER must accelerate a 40 A negative ion beam of D- at 1 MeV for 3600 s. In order to realize this beam source, design and R&D work is being carried out in many institutions under the coordination of the ITER Organization. The development of the key issues of the ion source, including source plasma uniformity and suppression of co-extracted electrons in D beam operation, also after long beam durations of over a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and then SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. The development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.

  6. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
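The FWHM measurement described above for a reconstructed point source can be sketched as follows, assuming a 1D profile through the peak and using linear interpolation at the half-maximum crossings (names and the Gaussian test profile are illustrative):

```python
import numpy as np

def fwhm(x, y):
    """FWHM of a single-peaked profile: locate the half-maximum crossings
    on each flank by linear interpolation and return their separation."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # left crossing between samples i-1 and i (y increasing there)
    xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    # right crossing between samples j+1 and j (y increasing toward j)
    xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

# Gaussian point-source profile with sigma = 2: FWHM should be ~2.355*sigma
x = np.linspace(-10, 10, 2001)
y = np.exp(-x**2 / (2 * 2.0**2))
width = fwhm(x, y)
```

With an iterative, nonlinear reconstruction this same measurement must be repeated at a fixed iteration number and a fixed (low) point-source contrast, per the paper's recommendation, since the apparent width otherwise depends on both.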

  7. The ITER Neutral Beam Test Facility towards SPIDER operation

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Gambetta, G.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Piovan, R.; Recchia, M.; Rizzolo, A.; Sartori, E.; Siragusa, M.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Fröschle, M.; Heinemann, B.; Kraus, W.; Nocentini, R.; Riedl, R.; Schiesko, L.; Wimmer, C.; Wünderlich, D.; Cavenago, M.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Hemsworth, R.

    2017-08-01

    SPIDER is one of two projects of the ITER Neutral Beam Test Facility under construction in Padova, Italy, at the Consorzio RFX premises. It will have a 100 keV beam source with a full-size prototype of the radiofrequency ion source for the ITER neutral beam injector (NBI) and also, similar to the ITER diagnostic neutral beam, it is designed to operate with a pulse length of up to 3600 s, featuring an ITER-like magnetic filter field configuration (for high extraction of negative ions) and caesium oven (for high production of negative ions) layout as well as a wide set of diagnostics. These features will allow a reproduction of the ion source operation in ITER, which cannot be done in any other existing test facility. SPIDER realization is well advanced and the first operation is expected at the beginning of 2018, with the mission of achieving the ITER heating and diagnostic NBI ion source requirements and of improving its performance in terms of reliability and availability. This paper mainly focuses on the preparation of the first SPIDER operations—integration and testing of SPIDER components, completion and implementation of diagnostics and control and formulation of operation and research plan, based on a staged strategy.

  8. Dose reduction in abdominal computed tomography: intraindividual comparison of image quality of full-dose standard and half-dose iterative reconstructions with dual-source computed tomography.

    PubMed

    May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M

    2011-07-01

We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstruction in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref. mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as the reference; HD images applying FBP; and HD images applying IRIS. For the HD datasets, only data from one tube-detector system was used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality Criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.

  9. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDP-I to LDP-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, the following statistically significant differences in contrast-to-noise ratios were shown (all, P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100, a small effect on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.

  10. Automatic reference selection for quantitative EEG interpretation: identification of diffuse/localised activity and the active earlobe reference, iterative detection of the distribution of EEG rhythms.

    PubMed

    Wang, Bei; Wang, Xingyu; Ikeda, Akio; Nagamine, Takashi; Shibasaki, Hiroshi; Nakamura, Masatoshi

    2014-01-01

EEG (electroencephalogram) interpretation is important for the diagnosis of neurological disorders. The proper adjustment of the montage can highlight the EEG rhythm of interest and avoid false interpretation. The aim of this study was to develop an automatic reference selection method to identify a suitable reference. The results may contribute to the accurate inspection of the distribution of EEG rhythms for quantitative EEG interpretation. The method includes two pre-judgements and one iterative detection module. The diffuse case is initially identified by pre-judgement 1 when intermittent rhythmic waveforms occur over large areas along the scalp. The earlobe reference or averaged reference is adopted for the diffuse case, depending on the effect of the earlobe reference as determined by pre-judgement 2. An iterative detection algorithm is developed for the localised case, when the signal is distributed in a small area of the brain. The suitable averaged reference is finally determined based on the detected focal and distributed electrodes. The presented technique was applied to the pathological EEG recordings of nine patients. One example of the diffuse case is introduced by illustrating the results of the pre-judgements. The diffusely intermittent rhythmic slow wave is identified. The effect of the active earlobe reference is analysed. Two examples of the localised case are presented, indicating the results of the iterative detection module. The focal and distributed electrodes are detected automatically during the iterative algorithm. The identification of diffuse and localised activity was satisfactory compared with visual inspection. The EEG rhythm of interest can be highlighted using a suitably selected reference. The implementation of an automatic reference selection method is helpful to detect the distribution of an EEG rhythm, which can improve the accuracy of EEG interpretation during both visual inspection and automatic interpretation.
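The montage step itself (re-referencing all channels to a chosen reference) is simple; the paper's contribution is in *selecting* among earlobe and averaged references. A sketch of the common averaged-reference transform only, with an arbitrary synthetic recording:

```python
import numpy as np

def average_reference(eeg):
    """Re-reference an (n_channels, n_samples) EEG array to the common
    average reference by subtracting the instantaneous mean over channels.
    (This sketch shows only the averaged-reference montage, not the
    paper's iterative selection algorithm.)"""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(19, 1000))   # 19-channel recording, arbitrary data
rerefed = average_reference(eeg)
```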

  11. Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology

    DTIC Science & Technology

    1996-01-01

feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g(i). A data packet or packet is data...loop depth, g(i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * (g(k) * (N+I) + k-1

  12. Millstone: software for multiplex microbial genome analysis and engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, Daniel B.; Kuznetsov, Gleb; Lajoie, Marc J.

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. Here, we describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  13. Millstone: software for multiplex microbial genome analysis and engineering.

    PubMed

    Goodman, Daniel B; Kuznetsov, Gleb; Lajoie, Marc J; Ahern, Brian W; Napolitano, Michael G; Chen, Kevin Y; Chen, Changping; Church, George M

    2017-05-25

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. We describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  14. Millstone: software for multiplex microbial genome analysis and engineering

    DOE PAGES

    Goodman, Daniel B.; Kuznetsov, Gleb; Lajoie, Marc J.; ...

    2017-05-25

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. Here, we describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  15. Rest-wavelength fiducials for the ITER core imaging x-ray spectrometer.

    PubMed

    Beiersdorfer, P; Brown, G V; Graf, A T; Bitter, M; Hill, K W; Kelley, R L; Kilbourne, C A; Leutenegger, M A; Porter, F S

    2012-10-01

    Absolute wavelength references are needed to derive the plasma velocities from the Doppler shift of a given line emitted by a moving plasma. We show that such reference standards exist for the strongest x-ray line in neonlike W(64+), which has become the line of choice for the ITER (Latin "the way") core imaging x-ray spectrometer. Close-by standards are the Hf Lβ(3) line and the Ir Lα(2) line, which bracket the W(64+) line by ±30 eV; other standards are given by the Ir Lα(1) and Lα(2) lines and the Hf Lβ(1) and Lβ(2) lines, which bracket the W(64+) line by ±40 and ±160 eV, respectively. The reference standards can be produced by an x-ray tube built into the ITER spectrometer. We present spectra of the reference lines obtained with an x-ray microcalorimeter and compare them to spectra of the W(64+) line obtained both with an x-ray microcalorimeter and a crystal spectrometer.
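The velocity inference from a Doppler shift measured against such a fiducial reduces to v = c·ΔE/E₀ (equivalently c·Δλ/λ). A one-line numeric sketch; the line energy and shift below are illustrative placeholders, not the actual W(64+) line values:

```python
c = 2.99792458e8   # speed of light [m/s]
E0 = 9000.0        # assumed rest-frame line energy [eV] (illustrative value)
dE = 0.3           # hypothetical measured Doppler shift [eV]

# non-relativistic Doppler relation: v/c = dE/E0
v = c * dE / E0    # line-of-sight plasma velocity [m/s]
```

This is why the ±30 eV proximity of the Hf and Ir reference lines matters: the closer the fiducial, the smaller the dispersion-calibration error folded into ΔE.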

  16. Rest-wavelength Fiducials for the ITER Core Imaging X-ray Spectrometer

    NASA Technical Reports Server (NTRS)

    Beiersdorfer, P.; Brown, G. V.; Graf, A. T.; Bitter, M.; Hill, K. W.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; Porter, F. S.

    2012-01-01

Absolute wavelength references are needed to derive the plasma velocities from the Doppler shift of a given line emitted by a moving plasma. We show that such reference standards exist for the strongest x-ray line in neonlike W64+, which has become the line of choice for the ITER (Latin "the way") core imaging x-ray spectrometer. Close-by standards are the Hf Lβ2 line and the Ir Lα2 line, which bracket the W64+ line by ±30 eV; other standards are given by the Ir Lα1 and Lα2 lines and the Hf Lβ1 and Lβ2 lines, which bracket the W64+ line by ±40 and ±160 eV, respectively. The reference standards can be produced by an x-ray tube built into the ITER spectrometer. We present spectra of the reference lines obtained with an x-ray microcalorimeter and compare them to spectra of the W64+ line obtained both with an x-ray microcalorimeter and a crystal spectrometer.

  17. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature to obtain the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups that are similar to digital holographic interferometry (but omitting the reference wave), displacement, deformation, or shape measurement is theoretically possible. To do this, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources of the object surface, but about their relative phase as well. Not only do experiments require strict mechanical precision to record useful data, but even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequence, speckle field characteristics, and sampling. Experiments were also done to demonstrate this principle with a deformable object as large as 5×5 cm. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
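The family of iterative phase retrieval algorithms referred to above alternates between measurement planes, imposing the recorded amplitude in each. As a classical stand-in for the multi-plane variant the paper studies, here is the two-plane Gerchberg-Saxton iteration on a synthetic object (this sketch uses FFT planes rather than the paper's longitudinally displaced free-space planes):

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200, seed=0):
    """Two-plane Gerchberg-Saxton iteration: find a phase consistent with
    measured amplitudes in both the object plane and the Fourier plane."""
    rng = np.random.default_rng(seed)
    g = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = amp_fourier * np.exp(1j * np.angle(G))   # impose Fourier amplitude
        g = np.fft.ifft2(G)
        g = amp_obj * np.exp(1j * np.angle(g))       # impose object amplitude
    return g

# synthetic object with known amplitudes in both planes
rng = np.random.default_rng(1)
f = rng.random((32, 32)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))
amp_F = np.abs(np.fft.fft2(f))
# residual Fourier-amplitude error before and after iterating
err0 = np.abs(np.abs(np.fft.fft2(gerchberg_saxton(np.abs(f), amp_F, n_iter=0))) - amp_F).mean()
err = np.abs(np.abs(np.fft.fft2(gerchberg_saxton(np.abs(f), amp_F))) - amp_F).mean()
```

The error is non-increasing over iterations, but convergence can stagnate, which is one reason multi-plane recordings (more amplitude constraints) help in practice.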

  18. Adaptive Cross-correlation Algorithm and Experiment of Extended Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

We have developed a new, adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for extended-scene SH-WFS, and tested the ACC algorithm with measured data from both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
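The FFT-based shift estimation at the core of such algorithms can be sketched with plain phase correlation; this is a simplified stand-in for the ACC algorithm (integer-pixel only, no iterative subpixel refinement):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer-pixel shift of `img` relative to `ref` via FFT
    phase correlation: the normalized cross-power spectrum inverse-transforms
    to a delta function at the shift."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))
    peak = np.array(np.unravel_index(np.argmax(corr.real), corr.shape), float)
    shape = np.array(corr.shape)
    # map peaks past the midpoint to negative shifts
    wrap = peak > shape / 2
    peak[wrap] -= shape[wrap]
    return peak

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                 # synthetic extended-scene cell
img = np.roll(ref, (3, -5), axis=(0, 1))   # known shift to recover
dy, dx = estimate_shift(ref, img)
```

An iterative scheme like ACC re-shifts the image by the current estimate and re-correlates, refining the estimate below a pixel.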

  19. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  20. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because they are often unknown, and Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems: an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios, where city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown.
Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment, which was held in Oklahoma City and also validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07), held at Dugway Proving Grounds in rural Utah.

  1. FENDL: International reference nuclear data library for fusion applications

    NASA Astrophysics Data System (ADS)

    Pashchenko, A. B.; Wienke, H.; Ganesan, S.

    1996-10-01

    The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries, covering the necessary nuclear input for all physics and engineering aspects of material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n) and He-3(d,p), extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3 or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1, as required by the customer, i.e. the ITER team, is considered a high-priority task in the coming months. The processed, tested and validated nuclear data libraries of FENDL-2 are expected to be ready by mid-1996 for use by the ITER team in the final phase of the ITER EDA, after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be transferred electronically to users from the IAEA Nuclear Data Section online system via the Internet. A total of 54 (sub)directories with 845 files, about 2 million blocks (1 block = 512 bytes) or roughly 1 Gigabyte of numerical data, is currently available online.

  2. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well-recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be made within the 3D model, as this is the starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are made at shorter periods (25 s). Synthetics for the 1D model were created through mode summation, while those for the 3D model were created using the spectral element method.
    To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well-characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations of the adjoint inversion process, the sources will be reexamined and relocated to further reduce the mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  3. Low pressure and high power rf sources for negative hydrogen ions for fusion applications (ITER neutral beam injection).

    PubMed

    Fantz, U; Franzen, P; Kraus, W; Falter, H D; Berger, M; Christ-Koch, S; Fröschle, M; Gutser, R; Heinemann, B; Martens, C; McNeely, P; Riedl, R; Speth, E; Wünderlich, D

    2008-02-01

    For plasma heating and current drive, the international fusion experiment ITER requires a neutral beam injection system based on negative hydrogen ion sources operating at 0.3 Pa. The ion source must deliver a current of 40 A of D(-) for up to 1 h with an accelerated current density of 200 A/m(2) and a ratio of coextracted electrons to ions below 1. The extraction area is 0.2 m(2), from an aperture array with an envelope of 1.5 x 0.6 m(2). A high-power rf-driven negative ion source has been successfully developed at the Max-Planck Institute for Plasma Physics (IPP) at three test facilities in parallel. Current densities of 330 and 230 A/m(2) have been achieved for hydrogen and deuterium, respectively, at a pressure of 0.3 Pa and an electron/ion ratio below 1, for a small extraction area (0.007 m(2)) and short pulses (<4 s). In the long-pulse experiment, equipped with an extraction area of 0.02 m(2), the pulse length has been extended to 3600 s. A large rf source, with the same width and half the height of the ITER source but without an extraction system, is intended to demonstrate the size scaling and plasma homogeneity of rf ion sources; it now operates routinely. First results on plasma homogeneity obtained from optical emission spectroscopy and Langmuir probes are very promising. Based on the success of the IPP development program, the high-power rf-driven negative ion source was recently chosen for the ITER beam systems in the ITER design review process.
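    The quoted beam parameters are mutually consistent, which a one-line check makes explicit (values taken directly from the abstract):

```python
# Consistency check of the quoted ITER beam parameters: the accelerated
# current density times the total extraction area gives the beam current.
current_density = 200.0    # A/m^2, accelerated D- current density
extraction_area = 0.2      # m^2, total aperture area
beam_current = current_density * extraction_area
print(beam_current)        # 40.0, i.e. the 40 A of D- demanded of the source
```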

  4. Work function measurements during plasma exposition at conditions relevant in negative ion sources for the ITER neutral beam injection.

    PubMed

    Gutser, R; Wimmer, C; Fantz, U

    2011-02-01

    Cesium-seeded sources for surface-generated negative hydrogen ions are major components of the neutral beam injection systems of future large-scale fusion experiments such as ITER. The work function is one of the most important quantities affecting source performance: the source stability and the delivered current density depend strongly on it during both the vacuum and plasma phases of the ion source. A modified photocurrent method was developed to measure the temporal behavior of the work function during and after cesium evaporation. The investigation of cesium-exposed Mo and MoLa samples, under surface and plasma conditions relevant to the ITER negative-hydrogen-ion-based neutral beam injection, showed the influence of impurities, which cause a fast degradation when the plasma exposure or the cesium flux onto the sample is stopped. A minimum work function close to that of bulk cesium was obtained under plasma exposure, while a significantly higher work function was observed under ITER-like vacuum conditions.
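    Photocurrent methods rest on the photoelectric threshold condition: a photon releases an electron only if its energy exceeds the surface work function. A minimal sketch of that relation (the bulk-cesium value of roughly 2.1 eV is textbook physics, not a number from this paper):

```python
# Photoelectric threshold: photoemission requires
# E_photon [eV] = 1239.84 / wavelength [nm] > work function.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

# Bulk caesium has a work function of roughly 2.1 eV, so green light
# (532 nm) can release photoelectrons while red light (650 nm) cannot.
print(photon_energy_ev(532) > 2.1, photon_energy_ev(650) > 2.1)
```

    Scanning the photon energy and locating this cutoff in the measured photocurrent is what lets such methods track the work function over time.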

  5. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full-wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges poorly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
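    The convergence issue described here can be illustrated on a toy linear analogue. The Neumann iteration solves u = f + K u by repeated substitution, which converges only while the contrast operator K is weak; the matrix below is an assumed stand-in for the integral operator, not the INCS formulation itself:

```python
import numpy as np

def neumann_solve(K, f, n_iter=60):
    """Neumann iterative solution of u = f + K u; converges only
    while K has spectral radius below 1 (a weak contrast source)."""
    u = f.copy()
    for _ in range(n_iter):
        u = f + K @ u
    return u

# Toy stand-in: a small random matrix plays the role of the contrast
# operator (in INCS proper, K is an integral operator over the medium).
rng = np.random.default_rng(1)
K = 0.05 * rng.random((6, 6))     # weak contrast: norm well below 1
f = rng.random(6)                 # "incident field" term
u = neumann_solve(K, f)
u_exact = np.linalg.solve(np.eye(6) - K, f)
print(np.max(np.abs(u - u_exact)))   # near machine precision: converged
```

    For strong contrast the series diverges, which is exactly the regime where the abstract replaces plain Neumann iteration with linearization plus a Bi-Conjugate Gradient Stabilized solver.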

  6. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE and produces a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing, the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open-source UAS autopilot.
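    The "exercise against a reference implementation" step can be sketched as follows. The approximation and the fixed stimulus grid are assumptions for illustration; a KLEE-style tool would instead derive path-covering inputs symbolically:

```python
import math

def sin_approx(x):
    """Hypothetical embedded support function under test:
    a 7th-order Taylor sine, accurate only near zero."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Exercise the stimuli against the math library as the reference
# implementation and record the worst-case deviation.
stimuli = [i * 0.1 for i in range(-10, 11)]
worst = max(abs(sin_approx(x) - math.sin(x)) for x in stimuli)
print(worst)   # worst-case deviation, on the order of 1e-6 at |x| = 1
```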

  7. First results of the ITER-relevant negative ion beam test facility ELISE (invited).

    PubMed

    Fantz, U; Franzen, P; Heinemann, B; Wünderlich, D

    2014-02-01

    An important step in the European R&D roadmap towards the neutral beam heating systems of ITER is the new test facility ELISE (Extraction from a Large Ion Source Experiment) for large-scale extraction from a half-size ITER RF source. The test facility was constructed over the last few years at Max-Planck-Institut für Plasmaphysik in Garching and is now operational. ELISE is gaining early experience of the performance and operation of large RF-driven negative hydrogen ion sources, with plasma illumination of a source area of 1 × 0.9 m(2) and an extraction area of 0.1 m(2) using 640 apertures. First results in volume operation, i.e., without caesium seeding, are presented.

  8. IEEE 1588 Time Synchronization Board in MTCA.4 Form Factor

    NASA Astrophysics Data System (ADS)

    Jabłoński, G.; Makowski, D.; Mielczarek, A.; Orlikowski, M.; Perek, P.; Napieralski, A.; Makijarvi, P.; Simrock, S.

    2015-06-01

    Distributed data acquisition and control systems in large-scale scientific experiments, such as ITER, require time synchronization with nanosecond precision. A protocol commonly used for that purpose is the Precision Time Protocol (PTP), also known as the IEEE 1588 standard. It uses standard Ethernet signalling and protocols and achieves timing accuracy of the order of tens of nanoseconds. The MTCA.4 is gradually becoming the platform of choice for building such systems, yet there is currently no commercially available implementation of a PTP receiver on that platform. In this paper, we present a module in the MTCA.4 form factor supporting this standard. The module may be used as a timing receiver providing reference clocks in an MTCA.4 chassis, generating a Pulse Per Second (PPS) signal, and allowing generation of triggers and timestamping of events on 8 configurable backplane lines and two front panel connectors. The module is based on a Xilinx Spartan-6 FPGA and a thermally stabilized voltage-controlled oscillator tuned by a digital-to-analog converter. The board supports standalone operation, without support from the host operating system, as the entire control algorithm runs on a MicroBlaze CPU implemented in the FPGA. The software support for the card includes a low-level API in the form of a Linux driver and user-mode library, and high-level APIs: ITER Nominal Device Support and an EPICS IOC. The device has been tested in the ITER timing distribution network (TCN) with three cascaded PTP-enabled Hirschmann switches and a GPS reference clock source. An RMS synchronization accuracy, measured by direct comparison of the PPS signals, of better than 20 ns has been obtained.
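    The clock-offset arithmetic at the heart of IEEE 1588 is the standard four-timestamp exchange; the numbers below are an assumed toy scenario, and the formulas assume a symmetric path delay:

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response arithmetic:
    t1 = Sync sent (master), t2 = Sync received (slave),
    t3 = Delay_Req sent (slave), t4 = Delay_Req received (master).
    Returns (slave clock offset, one-way path delay), assuming the
    path delay is symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Toy exchange in ns: slave clock 100 ns ahead, one-way delay 50 ns.
t1 = 0
t2 = t1 + 50 + 100     # master -> slave: propagation + clock offset
t3 = 300
t4 = t3 + 50 - 100     # slave -> master: propagation - clock offset
print(ptp_offset_delay(t1, t2, t3, t4))  # (100.0, 50.0)
```

    The slave then steers its oscillator (here, via the DAC-tuned VCXO) to drive this measured offset toward zero.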

  9. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study a threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and an associated penalty weight. Conventional manual and auto-planning with such a function involves iteratively updating the penalty weights while keeping the thresholds constant, an unintuitive and often inconsistent way of planning toward a reference DVH. However, driving a dose distribution by threshold values instead of penalty weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning become more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. Threshold-focused objective tuning should be explored over conventional methods of updating penalty weights for DVH-guided treatment planning.
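    The voxel-based quadratic penalty the abstract describes can be sketched directly; the dose values, thresholds, and unit weights below are assumptions for illustration, and TORA-style planning would adjust the thresholds between iterations rather than the weights:

```python
import numpy as np

def quadratic_penalty(dose, t_under, t_over, w_under=1.0, w_over=1.0):
    """Voxel-wise quadratic penalty with dose thresholds: voxels below
    t_under or above t_over are penalized quadratically, scaled by the
    under-/over-dose penalty weights."""
    under = np.maximum(t_under - dose, 0.0)   # underdose shortfall per voxel
    over = np.maximum(dose - t_over, 0.0)     # overdose excess per voxel
    return float(np.sum(w_under * under**2 + w_over * over**2))

# Four illustrative voxel doses (Gy) against assumed thresholds.
dose = np.array([58.0, 60.0, 62.0, 65.0])
print(quadratic_penalty(dose, t_under=60.0, t_over=63.0))  # 8.0
```

    Only the 58 Gy voxel (2 Gy under) and the 65 Gy voxel (2 Gy over) contribute, giving 4 + 4 = 8.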

  10. Feasibility of low-concentration iodinated contrast medium with lower-tube-voltage dual-source CT aortography using iterative reconstruction: comparison with automatic exposure control CT aortography.

    PubMed

    Shin, Hee Jeong; Kim, Song Soo; Lee, Jae-Hwan; Park, Jae-Hyeong; Jeong, Jin-Ok; Jin, Seon Ah; Shin, Byung Seok; Shin, Kyung-Sook; Ahn, Moonsang

    2016-06-01

    To evaluate the feasibility of low-concentration contrast medium (CM) for vascular enhancement, image quality, and radiation dose on computed tomography aortography (CTA) using a combined low-tube-voltage and iterative reconstruction (IR) technique. Ninety subjects underwent dual-source CT (DSCT) operating in dual-source, high-pitch mode. DSCT scans were performed using either high-concentration CM (Group A, n = 50; Iomeprol 400) or low-concentration CM (Group B, n = 40; Iodixanol 270). Group A was scanned using a reference tube potential of 120 kVp and 120 reference mAs under automatic exposure control with IR. Group B was scanned using low tube voltage (80 kVp, or 100 kVp if body mass index ≥25 kg/m(2)) at a fixed current of 150 mAs, along with IR. Images of the two groups were compared regarding attenuation, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), iodine load, and radiation dose in various locations of the CTA. Comparing the groups (Group A vs. Group B), average attenuation (454.73 ± 86.66 vs. 515.96 ± 101.55 HU), SNR (25.28 ± 4.34 vs. 31.29 ± 4.58), and CNR (21.83 ± 4.20 vs. 27.55 ± 4.81) were significantly greater in Group B, while image noise (18.76 ± 2.19 vs. 17.48 ± 3.34) was significantly lower in Group B (all P < 0.05). Homogeneous contrast enhancement from the ascending thoracic aorta to the infrarenal abdominal aorta was significantly superior in Group B (P < 0.05). A low-concentration CM and low-tube-voltage combination technique using IR is a feasible method, showing sufficient contrast enhancement and image quality.

  11. Modeling activities on the negative-ion-based Neutral Beam Injectors of the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agostinetti, P.; Antoni, V.; Chitarin, G.

    2011-09-26

    At the National Institute for Fusion Science (NIFS), large-scale negative ion sources have been widely used for the Neutral Beam Injectors (NBIs) mounted on the Large Helical Device (LHD), the world's largest superconducting helical system. These injectors have achieved outstanding performance in terms of beam energy, negative-ion current, and optics, and represent a reference for the development of heating and current drive NBIs for ITER. In the framework of the support activities for the ITER NBIs, the PRIMA test facility, which includes an RF-driven ion source with a 100 keV accelerator (SPIDER) and a complete 1 MeV neutral beam system (MITICA), is under construction at Consorzio RFX in Padova. An experimental validation of the codes has been undertaken in order to prove the accuracy of the simulations and the soundness of the SPIDER and MITICA designs. To this purpose, the whole set of codes has been applied to the LHD NBIs in a joint activity between Consorzio RFX and NIFS, with the goal of comparing and benchmarking the codes against the experimental data. A description of these modeling activities and a discussion of the main results obtained are reported in this paper.

  12. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, including ones not initially representable by the analytic reference model. To overcome the interference between sub-systems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this model, the paper develops a digital decentralized adaptive tracker using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at the specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
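    The ILC idea of "training the control input via continual learning" can be shown on a deliberately simple example. The plant, step reference, and unity learning gain are all assumptions for illustration, not the paper's large-scale formulation:

```python
import numpy as np

def run_trial(u):
    """Assumed toy plant, repeated identically each trial:
    y[k] = 0.8*y[k-1] + u[k-1], starting from zero state."""
    y, out = 0.0, np.empty_like(u)
    for k in range(len(u)):
        y = 0.8 * y + u[k]
        out[k] = y
    return out

# P-type ILC: after every trial, correct the stored input with the
# tracking error (a learning gain of 1.0 is an assumed choice).
y_ref = np.ones(10)               # step reference over a 10-sample task
u = np.zeros(10)
for trial in range(15):
    u = u + 1.0 * (y_ref - run_trial(u))
print(np.max(np.abs(y_ref - run_trial(u))))  # error driven to ~0 by learning
```

    Because the same task repeats, each trial's error refines the stored input, which is what lets ILC sharpen tracking at the specified sampling instants.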

  13. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. The resulting matrix, A, is normalized to unit changes in the sources so that A·(delta)s = (delta)Isc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, (delta)s = A(-1)·(delta)Isc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not repeated each time, since the measurement matrix needs to be only approximate; because an iterative approach is used, the method remains valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
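    The matrix iteration described above can be sketched numerically. The response matrix, perturbation size, and target Isc values below are all hypothetical stand-ins for the actual lamp/sub-cell behavior:

```python
import numpy as np

# Hypothetical linear lamp-to-subcell response; off-diagonal terms model
# spectral overlap between sources and junctions (assumed values).
R = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])

def measure_isc(settings):
    """Stand-in for measuring Isc of the three AM0-calibrated sub-cells."""
    return R @ settings

target_isc = np.array([0.50, 0.48, 0.52])   # calibrated Isc values (assumed)

# Build the sensitivity matrix A once, from small perturbations of each
# source, normalized to unit changes in the settings.
s = np.array([0.4, 0.4, 0.4])
A = np.empty((3, 3))
for j in range(3):
    ds = np.zeros(3)
    ds[j] = 1e-3
    A[:, j] = (measure_isc(s + ds) - measure_isc(s)) / 1e-3

# Iterate: delta_s = A^-1 * delta_Isc, updating ALL sources each step.
for step in range(6):
    s = s + np.linalg.solve(A, target_isc - measure_isc(s))
print(np.max(np.abs(measure_isc(s) - target_isc)))  # residual near zero
```

    With a perfectly linear response this converges in one step; the real lamps are only approximately linear, which is why four to six steps are typical and why A need only be approximate.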

  14. Imaging the Parasinus Region with a Third-Generation Dual-Source CT and the Effect of Tin Filtration on Image Quality and Radiation Dose.

    PubMed

    Lell, M M; May, M S; Brand, M; Eller, A; Buder, T; Hofmann, E; Uder, M; Wuest, W

    2015-07-01

    CT is the imaging technique of choice in the evaluation of midface trauma or inflammatory disease. We performed a systematic evaluation of scan protocols to optimize image quality and radiation exposure on third-generation dual-source CT. CT protocols with different tube voltage (70-150 kV), current (25-300 reference mAs), prefiltration, pitch value, and rotation time were systematically evaluated. All images were reconstructed with iterative reconstruction (Advanced Modeled Iterative Reconstruction, level 2). To individually compare results with otherwise identical factors, we obtained all scans on a frozen human head. Conebeam CT was performed for image quality and dose comparison with multidetector row CT. Delineation of important anatomic structures and incidental pathologic conditions in the cadaver head was evaluated. One hundred kilovolts with tin prefiltration demonstrated the best compromise between dose and image quality. The most dose-effective combination for trauma imaging was Sn100 kV/250 mAs (volume CT dose index, 2.02 mGy), and for preoperative sinus surgery planning, Sn100 kV/150 mAs (volume CT dose index, 1.22 mGy). "Sn" indicates an additional prefiltration of the x-ray beam with a tin filter to constrict the energy spectrum. Exclusion of sinonasal disease was possible with even a lower dose by using Sn100 kV/25 mAs (volume CT dose index, 0.2 mGy). High image quality at very low dose levels can be achieved by using a Sn100-kV protocol with iterative reconstruction. The effective dose is comparable with that of conventional radiography, and the high image quality at even lower radiation exposure favors multidetector row CT over conebeam CT. © 2015 by American Journal of Neuroradiology.

  15. Towards the Experimental Assessment of the DQE in SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Fountos, G. P.; Michail, C. M.

    2017-11-01

    The purpose of this work was to introduce the Detective Quantum Efficiency (DQE) in single photon emission computed tomography (SPECT) systems using a flood source. A Tc-99m-based flood source (Eγ = 140 keV), consisting of a radiopharmaceutical solution of dithiothreitol (DTT, 10-3 M)/Tc-99m(III)-DMSA, 40 mCi/40 ml, bound to the grains of an Agfa MammoRay HDR Medical X-ray film, was prepared in the laboratory. The source was placed between two PMMA blocks and images were obtained using the brain tomographic acquisition protocol (DatScan-brain). The Modulation Transfer Function (MTF) was evaluated using the Iterative 2D algorithm. All imaging experiments were performed on a Siemens e-Cam gamma camera. The Normalized Noise Power Spectra (NNPS) were obtained from the sagittal views of the source. The highest MTF values were obtained for the Flash Iterative 2D algorithm with 24 iterations and 20 subsets. The noise levels of the SPECT reconstructed images, in terms of the NNPS, were found to increase as the number of iterations increases. The behavior of the DQE was influenced by both the MTF and the NNPS: as the number of iterations increased, higher MTF values were obtained, but with a parallel increase in image noise, as depicted in the NNPS results. DQE values were found to be higher when the number of iterations leads to resolution saturation. The method presented here is novel and easy to implement, requiring materials commonly found in clinical practice, and can be useful in the quality control of SPECT scanners.
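    The interplay of MTF and NNPS in the DQE can be illustrated with a commonly used planar-imaging form, DQE(f) = MTF²(f) / (q · NNPS(f)). The curve shapes and the fluence term below are assumed toy values, not the paper's measured data or its exact SPECT normalization:

```python
import numpy as np

# Illustrative spatial-frequency axis and toy curves (assumed shapes).
f = np.linspace(0.01, 1.0, 50)        # spatial frequency, cycles/mm
mtf = np.exp(-3.0 * f)                # resolution falls with frequency
nnps = 0.002 + 0.001 * f              # normalized noise power spectrum
q = 1.0e4                             # incident photon fluence term (assumed)

# A commonly used form: DQE(f) = MTF^2(f) / (q * NNPS(f)).
dqe = mtf**2 / (q * nnps)
print(float(dqe[0]), float(dqe[-1]))  # DQE declines toward high frequency
```

    This makes the abstract's trade-off explicit: more iterations raise the MTF (numerator) but also raise the NNPS (denominator), so the DQE only improves while the resolution gain outpaces the noise growth.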

  16. Diagnostics of the ITER neutral beam test facility.

    PubMed

    Pasqualotto, R; Serianni, G; Sonato, P; Agostini, M; Brombin, M; Croci, G; Dalla Palma, M; De Muri, M; Gazza, E; Gorini, G; Pomaro, N; Rizzolo, A; Spolaore, M; Zaniol, B

    2012-02-01

    The ITER heating neutral beam (HNB) injector, based on negative ions accelerated at 1 MV, will be tested and optimized in the SPIDER source and MITICA full injector prototypes, using a set of diagnostics not available on the ITER HNB. The RF source, where the H(-)∕D(-) production is enhanced by cesium evaporation, will be monitored with thermocouples, electrostatic probes, optical emission spectroscopy, cavity ring down, and laser absorption spectroscopy. The beam is analyzed by cooling water calorimetry, a short pulse instrumented calorimeter, beam emission spectroscopy, visible tomography, and neutron imaging. Design of the diagnostic systems is presented.

  17. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoding information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
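    The LLR-reweighting feedback step can be sketched as follows. This is only a plausible rule under stated assumptions: the exact modification used in the paper (and its SNR-dependent weighting factor) is not reproduced here, and the index sets and factor are hypothetical:

```python
def reweight_llrs(llrs, correct_idx, error_idx, w):
    """Sketch of source-aided LLR adjustment: boost the confidence of
    bits the JPEG2000 decoder flags as correct, and attenuate / flip
    the sign of bits it flags as erroneous, by an assumed factor w."""
    out = list(llrs)
    for i in correct_idx:
        out[i] *= w           # reinforce bits known to be correct
    for i in error_idx:
        out[i] *= -1.0 / w    # contradict bits known to be wrong
    return out

# Three illustrative LLRs: bit 2 flagged correct, bit 0 flagged in error.
print(reweight_llrs([0.5, -1.2, 2.0], correct_idx=[2], error_idx=[0], w=4.0))
```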

  18. Robust iterative closest point algorithm based on global reference point for rotation invariant registration.

    PubMed

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, accurate correspondences are established using the weighted rotation invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes the registration in a coarse-to-fine way whatever the initial rotation angle is, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm.
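    The SVD step solved inside each iteration (the abstract's "secondly") is the standard least-squares rigid alignment of corresponded point sets. A minimal sketch with assumed toy data, omitting the paper's correspondence weighting and global-reference-point feature:

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~= Q,
    for corresponded 3 x N point sets, via SVD of the cross-covariance."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: rotate/translate a random cloud and recover the motion,
# even for a large rotation angle (60 degrees about z).
rng = np.random.default_rng(2)
P = rng.random((3, 30))
a = np.pi / 3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Q = Rz @ P + np.array([[0.3], [0.1], [-0.2]])
R_est, t_est = rigid_transform_svd(P, Q)
print(np.allclose(R_est, Rz))  # True
```

    Full ICP alternates this closed-form solve with re-matching of closest points; it is the matching step, not this solve, that the large-rotation failure mode and the proposed rotation-invariant feature address.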

  19. Robust iterative closest point algorithm based on global reference point for rotation invariant registration

    PubMed Central

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it requires good initial parameters and easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, the global reference point itself being rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, accurate correspondences are established by using the weighted rotation-invariant feature distance together with the position distance. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contributions of positions and features. Finally, the new algorithm accomplishes the registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm. PMID:29176780

  20. Understanding the Universal Right to Education as Jurisgenerative Politics and Democratic Iterations

    ERIC Educational Resources Information Center

    Wahlstrom, Ninni

    2009-01-01

    This article examines how the universal human right to education can be understood in terms of what Seyla Benhabib considers "democratic iterations". Further, by referring to the concept of jurisgenerative politics, Benhabib argues that a democratic people reinterpret guiding norms and principles which they find themselves bound to,…

  1. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
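
    The biasing idea can be sketched with a simple importance-sampling toy: draw the same number of sites in every region regardless of its power, and attach the weight p_true/p_biased to each site so tallies stay unbiased. This is only a sketch of the principle, not the published method; the function name and the per-region bookkeeping are illustrative assumptions.

```python
import numpy as np

def sample_uniform_fission_sites(region_power, sites_total):
    """Uniform-fission-site style biasing (illustrative sketch).

    Instead of drawing fission sites in proportion to regional power
    (which starves low-power regions of histories), the same number of
    sites is drawn in every region and each site carries the statistical
    weight p_true / p_biased, so region tallies remain unbiased while
    low-power regions receive far more samples.
    """
    n_regions = len(region_power)
    p_true = region_power / region_power.sum()      # analog site density
    p_biased = np.full(n_regions, 1.0 / n_regions)  # uniform site density
    per_region = sites_total // n_regions
    regions = np.repeat(np.arange(n_regions), per_region)
    weights = p_true[regions] / p_biased[regions]   # unbiasing weights
    return regions, weights
```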

  2. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which treats the source-to-DF distance as a constant, is presented. It is pointed out that using a weight factor based on signal strength is not ideal because of the inexact inverse relation between signal strength and distance and the inaccuracy of the signal amplitude. An iterative calculation method is presented that uses the distance from the source to each DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Computer simulations for a 4-DF system are presented to show the improvement in location accuracy achieved through use of the iterative method.

  3. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, a corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, the numerical results obtained with the fourth-order CN discretization scheme show that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
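
    The component-wise SOR sweep referred to above can be sketched generically as follows (a textbook formulation applied to a small test system, not the paper's five-point CN code; the relaxation parameter 1.2 in the usage below is an arbitrary illustrative choice).

```python
import numpy as np

def sor_solve(A, b, omega=1.2, tol=1e-10, max_iter=10000):
    """Solve A x = b with Successive Over-Relaxation.

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes and,
    for the diagonally dominant systems produced by Crank-Nicolson-type
    discretizations, typically cuts the iteration count.
    Returns the solution and the number of sweeps used.
    """
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        err = 0.0
        for i in range(n):
            # sum over already-updated (j < i) and not-yet-updated (j > i) entries
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            err = max(err, abs(new - x[i]))
            x[i] = new
        if err < tol:
            return x, it + 1
    return x, max_iter
```

    On a 1D model diffusion matrix, omega = 1.2 needs visibly fewer sweeps than the Gauss-Seidel case omega = 1, mirroring the comparison reported in the abstract.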

  4. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
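
    The iterative HLS step described above (linearizing the hyperbolic range-difference equations around the current estimate) can be sketched as follows. This is a generic Gauss-Newton textbook formulation, not the authors' code; the function name and stopping threshold are illustrative assumptions.

```python
import numpy as np

def hls_locate(sensors, tdoa_ranges, x0, iters=50):
    """Hyperbolic least squares source localization (Gauss-Newton).

    tdoa_ranges[k] is the range-difference c * TDoA between sensor k+1
    and sensor 0, i.e. ||x - s_{k+1}|| - ||x - s_0||. Each iteration
    linearizes the hyperbolic equations around the current estimate and
    solves the resulting linear least-squares problem.
    """
    x = np.asarray(x0, float).copy()
    s0, sk = sensors[0], sensors[1:]
    for _ in range(iters):
        d0 = np.linalg.norm(x - s0)
        dk = np.linalg.norm(x - sk, axis=1)
        r = (dk - d0) - tdoa_ranges                    # residuals
        J = (x - sk) / dk[:, None] - (x - s0) / d0     # Jacobian of residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x -= step
        if np.linalg.norm(step) < 1e-12:
            break
    return x
```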

  5. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  6. An open source, 3D printed preclinical MRI phantom for repeated measures of contrast agents and reference standards.

    PubMed

    Cox, B L; Ludwig, K D; Adamson, E B; Eliceiri, K W; Fain, S B

    2018-03-01

    In medical imaging, clinicians, researchers and technicians have begun to use 3D printing to create specialized phantoms to replace commercial ones due to their customizable and iterative nature. Presented here is the design of a 3D printed open source, reusable magnetic resonance imaging (MRI) phantom, capable of flood-filling, with removable samples for measurements of contrast agent solutions and reference standards, and for use in evaluating acquisition techniques and image reconstruction performance. The phantom was designed using SolidWorks, a computer-aided design software package. The phantom consists of custom and off-the-shelf parts and incorporates an air hole and Luer Lock system to aid in flood filling, a marker for orientation of samples in the filled mode and bolt and tube holes for assembly. The cost of construction for all materials is under $90. All design files are open-source and available for download. To demonstrate utility, B0 field mapping was performed using a series of gadolinium concentrations in both the unfilled and flood-filled modes. An excellent linear agreement (R2 > 0.998) was observed between measured relaxation rates (R1/R2) and gadolinium concentration. The phantom provides a reliable setup to test data acquisition and reconstruction methods and verify physical alignment in alternative-nuclei MRI techniques (e.g. carbon-13 and fluorine-19 MRI). A cost-effective, open-source MRI phantom design for repeated quantitative measurement of contrast agents and reference standards in preclinical research is presented. Specifically, the work is an example of how the emerging technology of 3D printing improves flexibility and access for custom phantom design.

  7. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than those of the SOR and Gauss-Seidel methods if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
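
    The quantity being compared in such theorems, the spectral radius of the iteration matrix, can be computed numerically as a sanity check. The sketch below builds the SOR iteration matrix from the standard splitting A = D - L - U (Gauss-Seidel is the omega = 1 case); it illustrates the comparison quantity only and does not implement the IMGS method itself.

```python
import numpy as np

def sor_iteration_matrix(A, omega):
    """Iteration matrix M of SOR for A x = b: x_{k+1} = M x_k + c.

    With A = D - L - U (D diagonal, L/U strictly lower/upper parts),
    M = (D - omega L)^{-1} [(1 - omega) D + omega U]; omega = 1 gives
    the Gauss-Seidel iteration matrix.
    """
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)

def spectral_radius(M):
    """Largest eigenvalue magnitude; < 1 means the iteration converges."""
    return max(abs(np.linalg.eigvals(M)))
```

    For the 1D model matrix tridiag(-1, 2, -1), mild over-relaxation (omega = 1.2) yields a smaller spectral radius than Gauss-Seidel, i.e. faster asymptotic convergence.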

  8. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
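
    Of the three deconvolution approaches listed above, inverse filtering with a truncated filter is the simplest to sketch. The toy below works in 1D with a smooth invented kernel rather than the 3D dipole kernel of T2*MRI; the function name, the truncation threshold and the kernel are illustrative assumptions, chosen only to show how near-zero kernel frequencies are suppressed instead of divided.

```python
import numpy as np

def truncated_inverse_filter(data, kernel_ft, eps=0.05):
    """Deconvolve data = IFFT(kernel_ft * FFT(source)) by inverse filtering.

    Frequencies where |kernel_ft| < eps are zeroed instead of divided,
    the truncation that tames the ill-posedness noted in the abstract
    (the dipole kernel of T2*MRI has such near-zero regions; a smooth
    1D kernel is used here purely for illustration).
    """
    D = np.fft.fft(data)
    H = np.asarray(kernel_ft, complex)
    # guard the division, then zero out the truncated frequencies
    inv = np.where(np.abs(H) > eps, 1.0 / np.where(H == 0, 1, H), 0.0)
    return np.real(np.fft.ifft(D * inv))
```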

  9. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  10. VERSE: a novel approach to detect virus integration in host genomes through reference genome customization.

    PubMed

    Wang, Qingguo; Jia, Peilin; Zhao, Zhongming

    2015-01-01

    Fueled by widespread applications of high-throughput next generation sequencing (NGS) technologies and urgent need to counter threats of pathogenic viruses, large-scale studies were conducted recently to investigate virus integration in host genomes (for example, human tumor genomes) that may cause carcinogenesis or other diseases. A limiting factor in these studies, however, is rapid virus evolution and resulting polymorphisms, which prevent reads from aligning readily to commonly used virus reference genomes, and, accordingly, make virus integration sites difficult to detect. Another confounding factor is host genomic instability as a result of virus insertions. To tackle these challenges and improve our capability to identify cryptic virus-host fusions, we present a new approach that detects Virus intEgration sites through iterative Reference SEquence customization (VERSE). To the best of our knowledge, VERSE is the first approach to improve detection through customizing reference genomes. Using 19 human tumors and cancer cell lines as test data, we demonstrated that VERSE substantially enhanced the sensitivity of virus integration site detection. VERSE is implemented in the open source package VirusFinder 2 that is available at http://bioinfo.mc.vanderbilt.edu/VirusFinder/.

  11. Low-level rf control of Spallation Neutron Source: System and characterization

    NASA Astrophysics Data System (ADS)

    Ma, Hengjie; Champion, Mark; Crofford, Mark; Kasemir, Kay-Uwe; Piller, Maurice; Doolittle, Lawrence; Ratti, Alex

    2006-03-01

    The low-level rf control system currently commissioned throughout the Spallation Neutron Source (SNS) LINAC evolved from three design iterations over one year of intensive research and development. Its digital hardware implementation is efficient and has succeeded in achieving a minimum latency of less than 150 ns, which is the key to accomplishing all-digital feedback control over the full bandwidth. The control bandwidth is analyzed in the frequency domain and characterized by testing its transient response. The hardware implementation also includes the provision of a time-shared input channel for superior phase differential measurement between the cavity field and the reference. A companion cosimulation system for the digital hardware was developed to ensure reliable long-term supportability. A large effort has also been made in operation software development for practical issues such as process automation, cavity filling, beam loading compensation, and cavity mechanical resonance suppression.

  12. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction

    PubMed Central

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M

    2014-01-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346

  13. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
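
    The four-vector structure mentioned above is visible in a standard preconditioned conjugate gradient loop, sketched below in dense form for illustration (the published implementation applies A matrix-free by reading the data each round; the diagonal preconditioner in the usage stands in for the block-diagonal one described in the abstract).

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv applies the inverse preconditioner. Only four solution-sized
    work vectors (x, r, z, p) are kept, matching the memory footprint
    noted in the abstract; A itself can be applied matrix-free.
    Returns the solution and the iteration count.
    """
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    z = M_inv(r)                       # preconditioned residual
    p = z.copy()                       # search direction
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```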

  14. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
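
    The path-approximation property exploited above, iterating on the whole trajectory at once rather than stepping through time, can be illustrated with plain Picard iteration. The sketch below uses a uniform grid and trapezoidal quadrature for brevity instead of the Chebyshev nodes of modified Chebyshev-Picard iteration; the function name and sweep count are illustrative assumptions.

```python
import numpy as np

def picard_iterate(f, y0, t, sweeps=40):
    """Approximate y' = f(t, y), y(t[0]) = y0 by Picard iteration.

    Each sweep updates the whole trajectory at once via
    y_{n+1}(t) = y0 + integral of f(s, y_n(s)) ds from t[0] to t,
    so the nodes converge jointly to fixed points in space, the
    path-approximation behavior the abstract attributes to modified
    Chebyshev-Picard iteration (here on a uniform grid with trapezoidal
    quadrature rather than Chebyshev nodes).
    """
    y = np.full_like(t, y0, dtype=float)
    for _ in range(sweeps):
        g = f(t, y)
        # cumulative trapezoid integral of g from t[0] to each node
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        y = y0 + integral
    return y
```

    For y' = y with y(0) = 1 on [0, 1], the iterates converge to the trapezoidal-rule approximation of exp(t), accurate to the quadrature error of the grid.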

  15. Strategy for the absolute neutron emission measurement on ITER.

    PubMed

    Sasao, M; Bertalot, L; Ishikawa, M; Popovichev, S

    2010-10-01

    An accuracy of 10% is demanded for the absolute neutron emission measurement on ITER. To achieve this accuracy, a functional combination of several types of neutron measurement subsystems, cross calibration among them, and in situ calibration are needed. Neutron transport calculations show that a suitable calibration source is a DT/DD neutron generator with a source strength higher than 10^10 n/s (neutrons per second) for DT and 10^8 n/s for DD. It will take eight weeks at minimum with this source to calibrate the flux monitors, profile monitors, and the activation system.

  16. Size scaling of negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛-scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½-scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE allow a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a distinct plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in both short and long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that of the prototype source despite the larger size.

  17. A trial of reliable estimation of non-double-couple component of microearthquakes

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Uchide, T.

    2017-12-01

    Although most tectonic earthquakes are caused by shear failure, it has been reported that injection-induced seismicity and earthquakes occurring in volcanoes and geothermal areas contain non-double-couple (non-DC) components (e.g., Dreger et al., 2000). Small non-DC components are also beginning to be detected in tectonic earthquakes (e.g., Ross et al., 2015). However, the non-DC component can generally be estimated with sufficient accuracy only for relatively large earthquakes. In order to gain further understanding of fluid-driven earthquakes and fault zone properties, it is important to estimate the full moment tensor of many microearthquakes with high precision. At the last AGU meeting, we proposed a method that iteratively applies the relative moment tensor inversion (RMTI) (Dahm, 1996) to source clusters, improving each moment tensor as well as their relative accuracy. This new method overcomes the problem of RMTI that errors in the mechanisms of reference events lead to biased solutions for other events, while retaining the advantage of RMTI that source mechanisms can be determined without computing Green's functions. The procedure is briefly summarized as follows: (1) sample co-located multiple earthquakes with focal mechanisms, as initial solutions, determined by an ordinary method; (2) apply the RMTI to estimate the source mechanism of each event relative to those of the other events; (3) repeat step 2 for the modified source mechanisms until the reduction of the total residual converges. In order to confirm whether the method can resolve non-DC components, we conducted numerical tests on synthetic data. Amplitudes were computed assuming non-DC sources, amplified by factors between 0.2 and 4 as site effects, with 10% random noise added. As initial solutions in step 1, we gave DC sources with arbitrary strike, dip and rake angles. 
In a test with eight sources at 12 stations, for example, all solutions were successively improved by iteration. Non-DC components were successfully resolved in spite of the fact that we gave DC sources as initial solutions. The application of the method to microearthquakes in geothermal area in Japan will be presented.

  18. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that ambient noise sources are unevenly distributed and the most energetic ambient noise mainly comes from azimuths of 40°-70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show a clear azimuthal dependence, and direct dispersion measurements from cross-correlations are strongly biased by the dominant noise energy. Because of this bias, the dispersion measurements from cross-correlations do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not retrieve Empirical Green's Functions accurately. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure based on plane-wave modeling, which includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The testing results show that: (1) the amplitudes of the phase velocity bias caused by directional noise sources are significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the phase velocity bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the media. 
By applying the iterative approach to the real data in Karamay, we further show that the phase velocity maps converge after ten iterations and that the phase velocity map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than the one based on uncorrected measurements. As ambient noise in the high frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.
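The three-step loop described in this abstract can be caricatured as a fixed-point iteration: predict the bias that a directional source distribution induces on each measurement, subtract it, and repeat until the map converges. The sketch below is a minimal stand-in, assuming a toy bias model (`forward_bias`) whose 2% amplitude is borrowed from the homogeneous-medium test quoted above; it is not the authors' tomography code:

```python
import numpy as np

def forward_bias(c_map, noise_weight):
    """Stand-in for the plane-wave-modelling step: the bias that a
    directional noise distribution induces on each interstation
    phase-velocity measurement (toy model, ~2% amplitude)."""
    return 0.02 * noise_weight * c_map

def correct_dispersion(c_obs, noise_weight, n_iter=10):
    c_est = c_obs.copy()                  # starting phase-velocity map
    for _ in range(n_iter):               # the study reports convergence in ~10 iterations
        bias = forward_bias(c_est, noise_weight)
        c_est = c_obs - bias              # remove the predicted directional bias
    return c_est

c_obs = np.array([3.06, 3.12, 2.98])      # biased measurements, km/s
c_corr = correct_dispersion(c_obs, noise_weight=1.0)
```

For this linear toy model the loop converges to the fixed point c = c_obs/(1 + 0.02), i.e. a correction of just under 2%.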

  19. Accelerating NLTE radiative transfer by means of the Forth-and-Back Implicit Lambda Iteration: A two-level atom line formation in 2D Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Milić, Ivan; Atanacković, Olga

    2014-10-01

State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, the Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known case of two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J=a+bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the ‘local’ coefficients of these implicit relations in two ‘inward’ sweeps of the 2D grid, along with the update of the source function in the other two ‘outward’ sweeps, leads to a solution four times faster than Jacobi’s. Moreover, the update made in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.
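The implicit linear relation J = a + bS can be illustrated on a toy two-level-atom problem. In the sketch below, `Lam` is a made-up normalised kernel standing in for the lambda operator, only the local (diagonal) coefficient b is treated implicitly, and the directional sweep structure of the real FBILI scheme is omitted; this shows the idea, not the published algorithm:

```python
import numpy as np

n, eps, B = 50, 0.1, 1.0                      # grid size, destruction probability (large, for quick convergence), Planck function
x = np.linspace(0.0, 1.0, n)
Lam = np.exp(-5.0 * np.abs(x[:, None] - x[None, :]))
Lam /= Lam.sum(axis=1, keepdims=True)         # rows sum to 1: a crude lambda operator

b = np.diag(Lam)                              # local coefficient of J = a + b*S
S = np.full(n, eps * B)                       # cold start
for _ in range(200):
    J = Lam @ S                               # formal solution J = Lambda[S]
    a = J - b * S                             # non-local part, held fixed this sweep
    S = (eps * B + (1 - eps) * a) / (1 - (1 - eps) * b)   # implicit update of S
# with constant B the converged solution is S = B everywhere
```

Treating only the local coefficient implicitly is the same idea that makes the implicit update cheap: the non-local part a is lagged, while the division by 1 - (1-eps)b removes the local self-coupling exactly.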

  20. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate -1 MV, 1 h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of its reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber, unifying the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  2. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedure, based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element spatial approach. Results: The spectral radius of the source iteration technique for the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as these have been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
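The infinite-medium spectral-radius result quoted in this abstract can be reproduced with a one-line iteration. In the sketch below the transport sweep is collapsed to its infinite-homogeneous-medium limit (an assumption of the toy, not a discrete-ordinates solver), so the error contracts by exactly the scattering ratio c each sweep:

```python
import numpy as np

sigma_s, sigma_t, q = 0.9, 1.0, 1.0        # assumed cross sections and source
c = sigma_s / sigma_t                      # scattering ratio
phi_exact = q / (sigma_t - sigma_s)        # infinite-medium balance solution

phi, errors = 0.0, []
for _ in range(30):
    phi = c * phi + q / sigma_t            # one collapsed "transport sweep"
    errors.append(abs(phi - phi_exact))

ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
# every ratio equals c: the spectral radius of source iteration
```

This is why high scattering ratios (c close to 1) make unaccelerated source iteration arbitrarily slow, motivating the synthetic acceleration and Krylov methods discussed elsewhere on this page.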

  3. Bragg x-ray survey spectrometer for ITER.

    PubMed

    Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S

    2012-10-01

Several potential impurity ions in ITER plasmas will lead to loss of confined energy through line and continuum emission. For real-time monitoring of impurities, a seven-channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents the design and analysis of the spectrometer, including x-ray tracing by the Shadow-XOP code, sensitivity calculations for the reference H-mode plasma, and a neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements of impurity monitoring in a 10 ms integration time at the minimum levels for low-Z to high-Z impurity ions can largely be met.

  4. Investigation of plasma parameters at BATMAN for variation of the Cs evaporation asymmetry and comparing two driver geometries

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Fantz, U.; Aza, E.; Jovović, J.; Kraus, W.; Mimo, A.; Schiesko, L.

    2017-08-01

The Neutral Beam Injection (NBI) system for fusion devices like ITER and, beyond ITER, DEMO requires large-scale sources for negative hydrogen ions. BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with the prototype source for the ITER NBI (1/8 of the ITER source size), dedicated to physical investigations thanks to its flexible access for diagnostics and exchange of source components. The required amount of negative ions is produced by surface conversion of hydrogen atoms or ions on caesiated surfaces. Several diagnostic tools (Optical Emission Spectroscopy, Cavity Ring-Down Spectroscopy for H-, Langmuir probes, Tunable Diode Laser Absorption Spectroscopy for Cs) allow the determination of plasma parameters in the ion source. Plasma parameters have been investigated for two modifications of the standard prototype source. Firstly, a second Cs oven was installed in the bottom part of the back plate in addition to the regularly used oven in the top part. Evaporation from the top oven alone can lead to a vertically asymmetric Cs distribution in front of the plasma grid; using both ovens, a symmetric Cs distribution can be achieved. However, in most cases no significant change of the extracted ion current was observed for varying Cs symmetry if the source was well conditioned. Secondly, BATMAN was equipped with a much larger, racetrack-shaped RF driver (area of 32×58 cm²) instead of the cylindrical RF driver (diameter of 24.5 cm). The main idea is that one racetrack driver could substitute two cylindrical drivers in larger sources, with increased reliability and power efficiency. For the same applied RF power, the electron density is lower in the racetrack driver due to its five times larger volume. The fraction of hydrogen atoms to molecules, however, is at a similar level or even slightly higher, which is a promising result for application in larger sources.

  5. Iterative reconstruction in single source dual-energy CT pulmonary angiography: Is it sufficient to achieve a radiation dose as low as state-of-the-art single-energy CTPA?

    PubMed

    Ohana, M; Labani, A; Jeung, M Y; El Ghannudi, S; Gaertner, S; Roy, C

    2015-11-01

Dual-energy (DE) brings numerous significant improvements in pulmonary CT angiography (CTPA), but is associated with a 15-50% increase in radiation dose that prevents its widespread use. We hypothesize that, thanks to iterative reconstruction (IR), single-source DE-CTPA acquired at the same radiation dose as a single-energy examination will maintain equivalent quantitative and qualitative image quality, allowing a more extensive use of the DE technique in clinical routine. Fifty patients (58% men, mean age 64.8 y ± 16.2, mean BMI 25.6 ± 4.5) were prospectively included and underwent single-source DE-CTPA with acquisition parameters (275 mA fixed tube current, 50% IR) tuned to target a radiation dose similar to that of a 100 kV single-energy CTPA (SE-CTPA), i.e., a DLP of 260 mGy cm. Thirty patients (47% men, 64.4 y ± 18.6, BMI 26.2 ± 4.6) from a previous prospective study on DE-CTPA (375 mA fixed tube current, reconstruction with filtered-back projection) were used as the reference group. Thirty-five consecutive patients (57% men, 65.8 y ± 15.5, BMI 25.7 ± 4.4) who underwent SE-CTPA on the same scanner (automated tube current modulation, 50% IR) served as a comparison. Subjective image quality was scored by two radiologists using a 5-level scale and compared with a Kruskal-Wallis nonparametric test. Density measurements on the 65 keV monochromatic reconstructions were used to calculate signal-to-noise (SNR) and contrast-to-noise (CNR) ratios, which were compared using a Student's t test. Correlations between image quality, SNR, CNR and BMI were sought using a Pearson's test; p<0.05 was considered significant. All examinations were of diagnostic quality (score ≥ 3). In comparison with the reference DE-CTPA and SE-CTPA protocols, the DE-IR group exhibited non-inferior image quality (p=0.95 and p=0.21, respectively) and a significantly lower mean image noise (p<0.01 and p=0.01), slightly improving the SNR (p=0.09 and p=0.47) and the CNR (p=0.12 and p=0.51). There was a strong negative relationship between BMI and SNR/CNR (ρ=-0.59 and -0.55, respectively), but only a moderate negative relationship between BMI and image quality (ρ=-0.27). With iterative reconstruction, the objective and subjective image quality of single-source DE-CTPA is preserved even though the radiation dose is lowered to that of a single-energy examination, overcoming a major limitation of the DE technique and allowing its widespread use in clinical routine. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Enabling Incremental Iterative Development at Scale: Quality Attribute Refinement and Allocation in Practice

    DTIC Science & Technology

    2015-06-01

abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010] ...relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with... Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional “Big Up-Front

  7. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  8. Fatty acid methyl ester analysis to identify sources of soil in surface water.

    PubMed

    Banowetz, Gary M; Whittaker, Gerald W; Dierksen, Karen P; Azevedo, Mark D; Kennedy, Ann C; Griffith, Stephen M; Steiner, Jeffrey J

    2006-01-01

Efforts to improve land-use practices to prevent contamination of surface waters with soil are limited by an inability to identify the primary sources of soil present in these waters. We evaluated the utility of fatty acid methyl ester (FAME) profiles of dry reference soils for multivariate statistical classification of soils collected from surface waters adjacent to agricultural production fields and a wooded riparian zone. Trials that compared approaches to concentrate soil from surface water showed that aluminum sulfate precipitation provided yields comparable to those obtained by vacuum filtration and was more suitable for handling large numbers of samples. Fatty acid methyl ester profiles were developed from reference soils collected from contrasting land uses in different seasons to determine whether specific fatty acids would consistently serve as variables in multivariate statistical analyses to permit reliable classification of soils. We used a Bayesian method and an independent iterative process to select appropriate fatty acids and found that variable selection was strongly affected by the season during which soil was collected. The apparent seasonal variation in the occurrence of marker fatty acids in FAME profiles from reference soils prevented preparation of a standardized set of variables. Nevertheless, accurate classification of soil in surface water was achieved utilizing fatty acid variables identified in seasonally matched reference soils. Correlation analysis of entire chromatograms and subsequent discriminant analyses utilizing a restricted number of fatty acid variables showed that FAME profiles of soils exposed to the aquatic environment still had utility for classification at least 1 wk after submersion.

  9. A new least-squares transport equation compatible with voids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J. B.; Morel, J. E.

    2013-07-01

We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_N method in conjunction with a linear-continuous finite-element method in space, and demonstrate several of its properties computationally. (authors)

  10. Georgia Tech Studies of Sub-Critical Advanced Burner Reactors with a D-T Fusion Tokamak Neutron Source for the Transmutation of Spent Nuclear Fuel

    NASA Astrophysics Data System (ADS)

    Stacey, W. M.

    2009-09-01

The possibility that a tokamak D-T fusion neutron source, based on ITER physics and technology, could be used to drive sub-critical, fast-spectrum nuclear reactors fueled with the transuranics (TRU) in spent nuclear fuel discharged from conventional nuclear reactors has been investigated at Georgia Tech in a series of studies summarized in this paper. It is found that sub-critical operation of such fast transmutation reactors is advantageous in allowing longer fuel residence time, hence greater TRU burnup between fuel reprocessing stages, and in allowing higher TRU loading without compromising safety, relative to what could be achieved in a similar critical transmutation reactor. The required plasma and fusion technology operating parameter range of the fusion neutron source is generally within the anticipated operational range of ITER. The implication of these results for fusion development policy, if they hold up under more extensive and detailed analysis, is that a D-T fusion tokamak neutron source for a sub-critical transmutation reactor, built on the basis of the ITER operating experience, could be a logical next step after ITER on the path to fusion electrical power reactors. At the same time, such an application would allow fusion to contribute to meeting the nation's energy needs at an earlier stage by helping to close the fission reactor nuclear fuel cycle.

  11. Characterization of the ITER CS conductor and projection to the ITER CS performance

    DOE PAGES

    Martovetsky, N.; Isono, T.; Bessette, D.; ...

    2017-06-20

The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, straight short-sample conductor testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. We obtained valuable data over a wide range of parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we make a projection of the performance of the CS in the ITER reference scenario.

  13. Simulation of Fusion Plasmas

    ScienceCinema

    Holland, Chris [UC San Diego, San Diego, California, United States

    2017-12-09

    The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the “burning plasma” regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.

  14. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, within the errors-in-variables (EIV) model. The EIV model assumes that all variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices; therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented.
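The quaternion parametrisation referred to in this abstract can be sketched as follows: a unit quaternion q = (w, x, y, z) uniquely encodes the rotation matrix of the similarity transform target = scale * R(q) * source + translation. The numbers below are made up for illustration; the Gauss-Helmert/TLS adjustment itself is not shown:

```python
import numpy as np

def quat_to_rot(q):
    # unit quaternion (w, x, y, z) -> 3x3 rotation matrix
    w, x, y, z = q / np.linalg.norm(q)     # enforce unit norm
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.9, 0.1, 0.2, 0.3])          # arbitrary (non-unit) quaternion
R = quat_to_rot(q)                          # orthonormal, det(R) = +1 by construction

scale, trans = 1.5, np.array([10.0, -5.0, 2.0])
src = np.array([1.0, 2.0, 3.0])             # a source point
tgt = scale * R @ src + trans               # forward 3D similarity transform
```

Because any nonzero quaternion normalises to a valid rotation, the parametrisation avoids the orthogonality constraints a direct estimate of the nine matrix entries would need.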

  15. A suite of diagnostics to validate and optimize the prototype ITER neutral beam injector

    NASA Astrophysics Data System (ADS)

    Pasqualotto, R.; Agostini, M.; Barbisan, M.; Brombin, M.; Cavazzana, R.; Croci, G.; Dalla Palma, M.; Delogu, R. S.; De Muri, M.; Muraro, A.; Peruzzo, S.; Pimazzoni, A.; Pomaro, N.; Rebai, M.; Rizzolo, A.; Sartori, E.; Serianni, G.; Spagnolo, S.; Spolaore, M.; Tardocchi, M.; Zaniol, B.; Zaupa, M.

    2017-10-01

The ITER project requires additional heating provided by two neutral beam injectors using 40 A negative deuterium ion beams accelerated to 1 MV. As the beam requirements have never been experimentally met, a test facility is under construction at Consorzio RFX, hosting two experiments: SPIDER, a full-size 100 kV prototype of the ion source, and MITICA, a full-size 1 MeV prototype of the ITER injector. Since diagnostics in the ITER injectors will be mainly limited to thermocouples, due to neutron and gamma radiation and to limited access, it is crucial to thoroughly investigate and characterize the key parameters of the source plasma and beam in more accessible experiments, using several complementary diagnostics assisted by modelling. In SPIDER and MITICA the ion source parameters will be measured by optical emission spectroscopy, electrostatic probes, cavity ring-down spectroscopy for H^- density and laser absorption spectroscopy for cesium density. Measurements over multiple lines-of-sight will provide the spatial distribution of the parameters over the source extension. The beam profile uniformity and divergence are studied with beam emission spectroscopy, complemented by visible tomography and neutron imaging, which are novel techniques, while an instrumented calorimeter based on custom unidirectional carbon-fiber-composite tiles observed by infrared cameras will measure the beam footprint in short pulses with the highest spatial resolution. All heated components will be monitored with thermocouples: as these will likely be the only measurements available in the ITER injectors, their capabilities will be investigated by comparison with other techniques. The SPIDER and MITICA diagnostics are described in the present paper with a focus on their rationale, key solutions, and the most original and effective implementations.

  16. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

Objective: In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to associate two or more different images spatially and to obtain the transformation describing their spatial relationship. For medical image registration, the process is not absolute; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include the change of geometrical dimensions, the combination of images, the image similarity measure, iterative optimization, and the interpolation process. Conclusion: The contribution of this review is a sorting of related image registration research methods, which can provide a brief reference for researchers on image registration. PMID:28845149
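The steps listed under "Result" (similarity measure, iterative optimization, interpolation) can be sketched in one dimension: estimate a translation by gradient descent on the sum-of-squared-differences measure, resampling the moving image with linear interpolation. The images, step size and true shift below are made-up values for illustration only:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 200)
fixed = np.exp(-(x - 5.0) ** 2)            # reference "image"
moving = np.exp(-(x - 6.2) ** 2)           # same object, shifted by 1.2 units

def warp(img, shift):
    # translate the image by `shift` using linear interpolation
    return np.interp(x, x - shift, img)

def ssd(shift):
    # similarity measure: sum of squared differences after warping
    return np.sum((warp(moving, shift) - fixed) ** 2)

t, lr, eps = 0.0, 0.01, 1e-4
for _ in range(200):                       # iterative optimization
    g = (ssd(t + eps) - ssd(t - eps)) / (2 * eps)   # finite-difference gradient
    t -= lr * g
# t now approximates the true shift of 1.2
```

Real registrations use 2D/3D transforms, smarter optimizers and metrics such as mutual information, but the loop structure (warp, measure, update) is the same.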

  17. Deuterium results at the negative ion source test facility ELISE

    NASA Astrophysics Data System (ADS)

    Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.

    2018-05-01

The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the ELISE (Extraction from a Large Ion Source Experiment) test facility, a source of half this size has been operational since 2013. The goal of this experiment is to demonstrate high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable steady-state operation with an RF power of up to 300 kW. While in short pulses the required D- current density has almost been reached, the long-pulse performance, particularly in deuterium, is limited by inhomogeneous and unstable currents of co-extracted electrons. By applying refined caesium evaporation and distribution procedures and by reducing and symmetrizing the electron currents, considerable progress has been made, and up to 190 A/m² of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.

  18. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
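The MTF estimate named in this abstract can be sketched as the normalised Fourier magnitude of a line spread function (LSF) measured across the plane source. The Gaussian LSF and its 5 mm FWHM below are synthetic placeholders; real input would come from the reconstructed STIR images:

```python
import numpy as np

pixel_mm = 0.5
x = np.arange(-32, 32) * pixel_mm
fwhm = 5.0                                  # assumed resolution, mm
sigma = fwhm / 2.355                        # FWHM -> Gaussian sigma
lsf = np.exp(-x**2 / (2 * sigma**2))        # synthetic line spread function

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                               # normalise so MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=pixel_mm) # spatial frequencies, cycles/mm
```

Comparing such curves across reconstruction algorithms (or across iteration counts and beta values) is exactly the kind of resolution comparison the study reports.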

  19. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, M., E-mail: mainak@iter-india.org; Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Chakraborty, A., E-mail: arunkc@iter-india.org

    2015-04-08

To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes such as optical emission spectroscopic diagnostics are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system is the only measurement that indicates the plasma condition inside the ITER NB negative ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is placed at a distance (∼30 cm) from the extraction region, so some uncertainty is expected if one tries to link the beam current with the plasma properties inside the RF driver. Plasma characterization in the source RF driver region is therefore essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  20. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
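A generic version of the procedure (an iterative NLLS fit followed by a Wald test of a nonlinear restriction) can be sketched as follows. The exponential model, the synthetic data and the restriction h(theta) = a*b - 1 = 0 are invented for illustration; this is the textbook Gauss-Newton/Wald recipe, not the modified statistic derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 60)
a_true, b_true = 2.0, 0.5
y = a_true * np.exp(b_true * x) + rng.normal(0.0, 0.05, x.size)

# log-linear regression supplies the starting values for the iteration
b0, log_a0 = np.polyfit(x, np.log(y), 1)
theta = np.array([np.exp(log_a0), b0])

for _ in range(20):                              # Gauss-Newton NLLS iterations
    a, b = theta
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    theta = theta + np.linalg.solve(J.T @ J, J.T @ (y - f))

a, b = theta
resid = y - a * np.exp(b * x)
s2 = resid @ resid / (x.size - 2)
cov = s2 * np.linalg.inv(J.T @ J)                # asymptotic covariance of theta

# Wald statistic for the nonlinear restriction h(theta) = a*b - 1 = 0
h = a * b - 1.0
grad_h = np.array([b, a])                        # gradient of h at theta-hat
wald = h**2 / (grad_h @ cov @ grad_h)            # ~ chi-square(1) under H0
```

The restricted-estimator and studentized-residual variants surveyed in the record modify this basic recipe rather than replace it.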

  1. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
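    Why unaccelerated source iteration struggles at large scattering ratios can be seen in a zero-dimensional caricature: an infinite-medium, one-group fixed point phi = c*phi + s, which converges geometrically at the scattering ratio c. The numbers below are illustrative and unrelated to SCEPTRE's discretization:

    ```python
    # Caricature of source iteration: phi_{k+1} = c*phi_k + s, fixed point s/(1-c).
    def source_iteration(c, s, tol=1e-12):
        phi, sweeps = 0.0, 0
        while True:
            phi_new = c * phi + s
            sweeps += 1
            if abs(phi_new - phi) < tol * (1.0 - c):  # bounds the true error
                return phi_new, sweeps
            phi = phi_new

    phi_lo, n_lo = source_iteration(c=0.50, s=1.0)   # mild scattering: fast
    phi_hi, n_hi = source_iteration(c=0.99, s=1.0)   # scattering ratio near 1: slow
    ```

    The error shrinks by a factor c per sweep, so c near 1 needs thousands of sweeps; this is exactly the regime TSA-style preconditioning targets.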

  2. Towards plasma cleaning of ITER first mirrors

    NASA Astrophysics Data System (ADS)

    Moser, L.; Marot, L.; Eren, B.; Steiner, R.; Mathys, D.; Leipold, F.; Reichle, R.; Meyer, E.

    2015-06-01

    To avoid reflectivity losses in ITER's optical diagnostic systems, on-site cleaning of metallic first mirrors via plasma sputtering is foreseen to remove deposit build-ups migrating from the main wall. In this work, the influence of aluminium and tungsten deposits on the reflectivity of molybdenum mirrors as well as the possibility to clean them with plasma exposure is investigated. Porous ITER-like deposits are grown to mimic the edge conditions expected in ITER, and a severe degradation in the specular reflectivity is observed as these deposits build up on the mirror surface. In addition, dense oxide films are produced for comparisons with porous films. The composition, morphology and crystal structure of several films were characterized by means of scanning electron microscopy, x-ray photoelectron spectroscopy, x-ray diffraction and secondary ion mass spectrometry. The cleaning of the deposits and the restoration of the mirrors' optical properties are possible either with a Kaufman source or radio frequency directly applied to the mirror (or radio frequency plasma generated directly around the mirror surface). Accelerating ions of an external plasma source through a direct current applied onto the mirror does not remove deposits composed of oxides. A possible implementation of plasma cleaning in ITER is addressed.

  3. Ultralow-dose CT of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction and model-based iterative reconstruction: 2D and 3D image quality.

    PubMed

    Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang

    2015-03-01

    OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.

  4. Upgrade of the BATMAN test facility for H- source development

    NASA Astrophysics Data System (ADS)

    Heinemann, B.; Fröschle, M.; Falter, H.-D.; Fantz, U.; Franzen, P.; Kraus, W.; Nocentini, R.; Riedl, R.; Ruf, B.

    2015-04-01

    The development of a radio frequency (RF) driven source for negative hydrogen ions for the neutral beam heating devices of fusion experiments has been successfully carried out at IPP since 1996 on the test facility BATMAN. The required ITER parameters have been achieved with the prototype source, consisting of a cylindrical driver on the back side of a racetrack-like expansion chamber. The extraction system, called "Large Area Grid" (LAG), was derived from a positive ion accelerator from ASDEX Upgrade (AUG), using its aperture size (ø 8 mm) and pattern but replacing the first two electrodes and masking down the extraction area to 70 cm2. BATMAN is a well diagnosed and highly flexible test facility which will be kept operational in parallel to the half size ITER source test facility ELISE for further developments to improve the RF efficiency and the beam properties. It is therefore planned to upgrade BATMAN with a new ITER-like grid system (ILG) representing almost one ITER beamlet group, namely 5 × 14 apertures (ø 14 mm). In addition to the standard three-grid extraction system, a repeller electrode can optionally be installed upstream of the grounded grid, positively charged against it by 2 kV. This is intended to affect the onset of the space charge compensation downstream of the grounded grid and to reduce the backstreaming of positive ions from the drift space into the ion source. For magnetic filter field studies a plasma grid current up to 3 kA will be available, as well as permanent magnets embedded into a diagnostic flange or in an external magnet frame. Furthermore, different source vessels and source configurations are under discussion for BATMAN, e.g. using the AUG-type racetrack RF source as driver instead of the circular one, or modifying the expansion chamber for a more flexible position of the external magnet frame.

  5. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. 
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
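    The core ICD idea of sweeping through one unknown at a time while keeping a running residual can be sketched on a generic linear least-squares problem; the matrix below is a toy stand-in, not FreeCT_ICD's actual CT system model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 5))    # stand-in "system matrix" (cheap column access)
    x_true = rng.standard_normal(5)
    b = A @ x_true                      # noiseless synthetic projection data

    x = np.zeros(5)
    r = b - A @ x                       # running residual, updated in place
    for sweep in range(200):            # ICD: repeated full passes over the unknowns
        for j in range(A.shape[1]):
            col = A[:, j]
            dx = (col @ r) / (col @ col)   # exact 1D minimizer along coordinate j
            x[j] += dx
            r -= dx * col                  # keep the residual consistent
    ```

    Storing the matrix column-wise is what makes each per-coefficient update cheap, which is the property the column-wise system-matrix storage in FreeCT_ICD exploits at CT scale.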

  6. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady and unsteady state heat distribution models on a projectile. The best location for installing MEMS on the projectile, based on surface temperature, is investigated. The numerical iteration methods of Jacobi and Gauss-Seidel are used to solve the steady state heat distribution model. The results from Jacobi and Gauss-Seidel are identical, but their iteration costs differ: Jacobi's method requires 350 iterations, whereas Gauss-Seidel converges in 188 iterations, faster than Jacobi's method. The comparison of the steady state simulation with the unsteady state model from a reference shows satisfactory agreement. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature among the points considered. The temperatures using Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
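    The Jacobi versus Gauss-Seidel comparison can be reproduced in miniature on a generic 2D Laplace grid; the grid size and boundary temperatures below are illustrative, not the paper's projectile geometry:

    ```python
    import numpy as np

    # Toy 2D Laplace (steady-state heat) problem on a small square grid.
    n = 12

    def solve(method, tol=1e-6, max_it=10000):
        T = np.zeros((n, n))
        T[0, :], T[-1, :] = 400.0, 300.0            # hot and cool edges (Kelvin)
        for it in range(1, max_it + 1):
            T_old = T.copy()
            src = T_old if method == "jacobi" else T  # Gauss-Seidel reads fresh values
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    T[i, j] = 0.25 * (src[i-1, j] + src[i+1, j]
                                      + src[i, j-1] + src[i, j+1])
            if np.max(np.abs(T - T_old)) < tol:
                return T, it
        return T, max_it

    T_j, it_j = solve("jacobi")
    T_gs, it_gs = solve("gauss-seidel")   # fewer sweeps, same answer
    ```

    Gauss-Seidel uses already-updated neighbor values within a sweep, which is why it reaches the same fixed point in fewer iterations, mirroring the 188-versus-350 count reported above.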

  7. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    NASA Astrophysics Data System (ADS)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward that requires neither additional measurements nor a priori knowledge, and it dispenses with measuring the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
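    For orientation, the ancestor of such amplitude-phase iterations, the classical two-plane error-reduction loop, can be sketched on a synthetic 1D signal; the paper's reference-assisted update differs from this plain scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    amp_obj = np.ones(n)                          # known object-plane amplitude
    truth = amp_obj * np.exp(2j * np.pi * rng.random(n))
    amp_fft = np.abs(np.fft.fft(truth))           # "measured" Fourier amplitude

    g = amp_obj.astype(complex)                   # initial guess: zero phase
    err = []
    for _ in range(100):
        G = np.fft.fft(g)
        err.append(np.linalg.norm(np.abs(G) - amp_fft))
        G = amp_fft * np.exp(1j * np.angle(G))    # impose Fourier amplitude
        g = np.fft.ifft(G)
        g = amp_obj * np.exp(1j * np.angle(g))    # impose object amplitude
    ```

    Alternately enforcing the two measured amplitudes while keeping the evolving phase is the basic mechanism that reference-aided update formulas accelerate.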

  8. Radiation pattern synthesis of planar antennas using the iterative sampling method

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Coffey, E. L.

    1975-01-01

    A synthesis method is presented for determining an excitation of an arbitrary (but fixed) planar source configuration. The desired radiation pattern is specified over all or part of the visible region. It may have multiple and/or shaped main beams with low sidelobes. The iterative sampling method is used to find an excitation of the source which yields a radiation pattern that approximates the desired pattern to within a specified tolerance. In this paper the method is used to calculate excitations for line sources, linear arrays (equally and unequally spaced), rectangular apertures, rectangular arrays (arbitrary spacing grid), and circular apertures. Examples using these sources to form patterns with shaped main beams, multiple main beams, shaped sidelobe levels, and combinations thereof are given.

  9. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
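    The idea of iterating a random search to escape local minima can be illustrated with a toy one-parameter misfit; this is a drastic simplification of the RISC scheme, and all numbers are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def misfit(x):
        # toy misfit: global minimum near x = 1, a local one near x = -1
        return (x**2 - 1.0)**2 + 0.2 * (x - 1.0)**2

    def anneal(x0, steps=2000):
        x, fx, T = x0, misfit(x0), 1.0
        for _ in range(steps):
            T *= 0.995                            # geometric cooling schedule
            cand = x + rng.normal(0.0, 0.5)
            fc = misfit(cand)
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc                  # accept downhill, or uphill with prob.
        return x, fx

    # Iterated search: independent restarts, keep the overall best solution.
    best_x, best_f = min((anneal(rng.uniform(-3.0, 3.0)) for _ in range(10)),
                         key=lambda t: t[1])
    ```

    A single annealing run started near the wrong basin can stall at the local minimum; repeating the search from fresh starting points, as in the iterated approach above, makes finding the global basin far more likely.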

  10. Fusion energy

    NASA Astrophysics Data System (ADS)

    1990-09-01

    The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of the quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.

  11. Modelling of caesium dynamics in the negative ion sources at BATMAN and ELISE

    NASA Astrophysics Data System (ADS)

    Mimo, A.; Wimmer, C.; Wünderlich, D.; Fantz, U.

    2017-08-01

    The knowledge of Cs dynamics in negative hydrogen ion sources is a key issue for achieving the ITER requirements for the Neutral Beam Injection (NBI) systems, i.e. one hour of operation with an accelerated ion current of 40 A of D- and a ratio between negative ions and co-extracted electrons below one. Production of negative ions is mostly achieved by conversion of hydrogen/deuterium atoms on a converter surface, which is caesiated in order to reduce the work function and increase the conversion efficiency. Understanding the Cs transport and redistribution mechanisms inside the source is necessary for achieving high performance. Cs dynamics was therefore investigated by means of numerical simulations performed with the Monte Carlo transport code CsFlow3D. Simulations of the prototype source (1/8 of the ITER NBI source size) have shown that the plasma distribution inside the source has the major effect on Cs dynamics during the pulse: asymmetry of the plasma parameters leads to asymmetry in the Cs distribution in front of the plasma grid. The simulated time traces and the general simulation results are in agreement with the experimental measurements. Simulations performed for the ELISE testbed (half of the ITER NBI source size) have shown an effect of the vacuum phase duration on the amount and stability of Cs during the pulse. The sputtering of Cs due to back-streaming ions was reproduced by the simulations and is in agreement with the experimental observation: this can become a critical issue during long pulses, especially in the case of continuous extraction as foreseen for ITER. These results and the acquired knowledge of Cs dynamics will be useful for better management of Cs, and thus for reducing its consumption, in the direction of the demonstration fusion power plant DEMO.

  12. Plasma-surface interaction in the context of ITER.

    PubMed

    Kleyn, A W; Lopes Cardozo, N J; Samm, U

    2006-04-21

    The decreasing availability of energy and concern about climate change necessitate the development of novel sustainable energy sources. Fusion energy is such a source. Although it will take several decades to develop it into routinely operated power sources, the ultimate potential of fusion energy is very high and badly needed. A major step forward in the development of fusion energy is the decision to construct the experimental test reactor ITER. ITER will stimulate research in many areas of science. This article serves as an introduction to some of those areas. In particular, we discuss research opportunities in the context of plasma-surface interactions. The fusion plasma, with a typical temperature of 10 keV, has to be brought into contact with a physical wall in order to remove the helium produced and drain the excess energy in the fusion plasma. The fusion plasma is far too hot to be brought into direct contact with a physical wall. It would degrade the wall and the debris from the wall would extinguish the plasma. Therefore, schemes are developed to cool down the plasma locally before it impacts on a physical surface. The resulting plasma-surface interaction in ITER is facing several challenges including surface erosion, material redeposition and tritium retention. In this article we introduce how the plasma-surface interaction relevant for ITER can be studied in small scale experiments. The various requirements for such experiments are introduced and examples of present and future experiments will be given. The emphasis in this article will be on the experimental studies of plasma-surface interactions.

  13. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s to calculate self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps used (on the order of the inverse plasma frequency) and mesh size, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. 
The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced as well as the coupling to codes describing the whole source (PIC codes or fluid codes). Presented and discussed are different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion as well as selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. The recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
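    To give a feel for the PIC building blocks reviewed here, a minimal Boris particle push in a uniform magnetic field looks as follows; the units are normalized and illustrative, and a real extraction-region code adds field solves, charge deposition, and many particles:

    ```python
    import numpy as np

    # One Boris push loop: a charged particle gyrating in a uniform B field (q/m = 1).
    qm, dt = 1.0, 0.01
    E = np.zeros(3)
    B = np.array([0.0, 0.0, 1.0])

    x = np.zeros(3)
    v = np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        v_minus = v + 0.5 * qm * dt * E            # first half electric kick
        t = 0.5 * qm * dt * B                      # magnetic rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v = v_minus + np.cross(v_prime, s)         # rotated velocity
        v = v + 0.5 * qm * dt * E                  # second half electric kick
        x = x + v * dt                             # leapfrog position update
    ```

    The half-kick/rotate/half-kick structure conserves the particle's speed exactly in a pure magnetic field, which is one reason this pusher is the standard choice in PIC codes.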

  14. Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes

    NASA Astrophysics Data System (ADS)

    Haist, Tobias; Tiziani, Hans J.

    1999-03-01

    An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.

  15. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. 
For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  16. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. 
We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
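    For reference, the traditional Mayfield estimator that these methods improve upon amounts to a few lines. The nest-check records below are hypothetical, and the common convention of crediting partial exposure for intervals ending in failure is assumed to be folded into the day counts already:

    ```python
    # Traditional Mayfield estimate from hypothetical nest-check records.
    # Each record: (exposure days contributed by the nest, 1 if it failed).
    records = [(5, 0), (5, 0), (3, 1), (5, 0), (2, 1), (5, 0)]

    exposure_days = sum(d for d, _ in records)       # 25 nest-days observed
    failures = sum(lost for _, lost in records)      # 2 failures
    dsr = 1.0 - failures / exposure_days             # daily survival rate
    nest_success = dsr ** 28                         # over a 28-day nest period
    ```

    The single constant daily survival rate is exactly the assumption the Iterative Mayfield and maximum likelihood methods relax by letting survival vary over time.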

  17. Development of two-channel prototype ITER vacuum ultraviolet spectrometer with back-illuminated charge-coupled device and microchannel plate detectors.

    PubMed

    Seon, C R; Choi, S H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2010-10-01

    A vacuum ultraviolet (VUV) spectrometer of a five-channel spectral system is designed for ITER main plasma impurity measurement. To develop and verify the system design, a two-channel prototype system was fabricated with channels No. 3 (14.4-31.8 nm) and No. 4 (29.0-60.0 nm) of the five channels. The optical system consists of a collimating mirror to collect the light from the source to the slit, two holographic diffraction gratings with toroidal geometry, and two different electronic detectors. For the test of the prototype system, a hollow cathode lamp is used as a light source. To find the appropriate detector for the ITER VUV system, two kinds of detectors, a back-illuminated charge-coupled device and a microchannel plate electron multiplier, are tested, and their performance has been investigated.

  18. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
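    In scalar form, the iteratively regularized Gauss-Newton update x_{k+1} = x_k + (J^2 + a_k)^{-1} (J (y - F(x_k)) + a_k (x_0 - x_k)) can be sketched as follows; the smooth, noise-free toy problem below is only illustrative, whereas the paper's setting is infinite-dimensional with noisy data:

    ```python
    import numpy as np

    # Scalar toy problem F(x) = exp(x), "data" y = F(1); x0 is the a priori guess.
    def F(x): return np.exp(x)
    def J(x): return np.exp(x)       # F'(x)

    y, x0 = np.exp(1.0), 0.0
    x, alpha = x0, 1.0
    for _ in range(30):
        j = J(x)
        # iteratively regularized Gauss-Newton step
        x = x + (j * (y - F(x)) + alpha * (x0 - x)) / (j * j + alpha)
        alpha *= 0.5                 # geometric decay of the regularization
    ```

    As alpha decays the step approaches a plain Gauss-Newton step; with noisy data the loop must instead be stopped early, which is precisely where the heuristic rule studied in the paper comes in.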

  19. Carbon fiber composites application in ITER plasma facing components

    NASA Astrophysics Data System (ADS)

    Barabash, V.; Akiba, M.; Bonal, J. P.; Federici, G.; Matera, R.; Nakamura, K.; Pacher, H. D.; Rödig, M.; Vieider, G.; Wu, C. H.

    1998-10-01

    Carbon Fiber Composites (CFCs) are one of the candidate armour materials for the plasma facing components of the International Thermonuclear Experimental Reactor (ITER). For the present reference design, CFC has been selected as armour for the divertor target near the plasma strike point mainly because of unique resistance to high normal and off-normal heat loads. It does not melt under disruptions and might have higher erosion lifetime in comparison with other possible armour materials. Issues related to CFC application in ITER are described in this paper. They include erosion lifetime, tritium codeposition with eroded material and possible methods for the removal of the codeposited layers, neutron irradiation effect, development of joining technologies with heat sink materials, and thermomechanical performance. The status of the development of new advanced CFCs for ITER application is also described. Finally, the remaining R&D needs are critically discussed.

  20. Reliable recovery of the optical properties of multi-layer turbid media by iteratively using a layered diffusion model at multiple source-detector separations

    PubMed Central

    Liao, Yu-Kai; Tseng, Sheng-Hao

    2014-01-01

Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thicknesses and optical property settings. It is expected that this algorithm can work with photon transport models in the frequency and time domains for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
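The mutual-referencing iteration can be sketched as a Gauss-Seidel-style fixed point in which each source-detector separation updates the layer it is most sensitive to. The two-layer sensitivity matrix below is a hypothetical stand-in for a real layered diffusion model:

```python
import numpy as np

# Toy two-layer model (hypothetical sensitivities, not a real diffusion model):
# the short-SDS signal is dominated by the top layer, the long-SDS by both layers.
S = np.array([[1.0, 0.2],    # short SDS: strong top-layer, weak bottom-layer sensitivity
              [0.4, 1.0]])   # long SDS
mu_true = np.array([0.01, 0.02])          # layer absorption coefficients (mm^-1)
meas = S @ mu_true                        # noiseless synthetic measurements

# Iterative mutual referencing: each SDS updates the layer it is most sensitive
# to, using the other layer's current estimate, until the estimates stabilize.
mu = np.zeros(2)
for _ in range(50):
    mu[0] = (meas[0] - S[0, 1] * mu[1]) / S[0, 0]   # short SDS -> top layer
    mu[1] = (meas[1] - S[1, 0] * mu[0]) / S[1, 1]   # long SDS  -> bottom layer
```

Because the off-diagonal (cross-layer) sensitivities are weaker than the diagonal ones, each sweep shrinks the error by a fixed factor and the loop converges to a stable set of properties.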

  1. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
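A minimal numerical sketch of the iterative equivalent-source solve, assuming a toy causal convolution model in place of the convective Green's function; the impulse response, source positions, and the running average used as a simple stand-in for the paper's time-averaging stabilization are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete time-domain propagation model: measured pressure p = G @ q, where G is
# a (hypothetical) convolution matrix built from a known Green's-function impulse
# response, and q is the equivalent-source strength time series to recover.
nt = 128
t = np.arange(nt)
h = np.exp(-t / 10.0) * np.sin(0.5 * t)          # toy impulse response
G = np.zeros((nt, nt))
for j in range(nt):
    G[j:, j] = h[: nt - j]                        # causal convolution (Toeplitz)

q_true = np.zeros(nt)
q_true[20] = 1.0
q_true[60] = -0.7                                 # two impulsive sources
p = G @ q_true + 1e-3 * rng.standard_normal(nt)   # measured pressure + noise

# Landweber-style iterative solution of p = G q; averaging successive iterates is
# used here as a simple stand-in for the time-averaging stabilization in the text.
step = 1.0 / np.linalg.norm(G, 2) ** 2
q = np.zeros(nt)
q_avg = np.zeros(nt)
for k in range(1, 2001):
    q = q + step * G.T @ (p - G @ q)              # gradient step on the misfit
    q_avg += (q - q_avg) / k                      # running average of iterates
```

The recovered strengths localize the impulsive sources in time, mirroring how the solved equivalent-source strengths are used for source identification.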

  2. Very low-dose (0.15 mGy) chest CT protocols using the COPDGene 2 test object and a third-generation dual-source CT scanner with corresponding third-generation iterative reconstruction software.

    PubMed

    Newell, John D; Fuld, Matthew K; Allmendinger, Thomas; Sieren, Jered P; Chan, Kung-Sik; Guo, Junfeng; Hoffman, Eric A

    2015-01-01

The purpose of this study was to evaluate the impact of ultralow radiation dose single-energy computed tomographic (CT) acquisitions with Sn prefiltration and third-generation iterative reconstruction on density-based quantitative measures of growing interest in phenotyping pulmonary disease. The effects of both decreasing dose and different body habitus on the accuracy of the mean CT attenuation measurements and the level of image noise (SD) were evaluated using the COPDGene 2 test object, containing 8 different materials of interest ranging from air to acrylic and including various density foams. A third-generation dual-source multidetector CT scanner (Siemens SOMATOM FORCE; Siemens Healthcare AG, Erlangen, Germany) running advanced modeled iterative reconstruction (ADMIRE) software (Siemens Healthcare AG) was used. We used normal and very large body habitus rings at dose levels varying from 1.5 to 0.15 mGy using a spectrally shaped (0.6-mm Sn) tube output of 100 kVp. Three CT scans were obtained at each dose level using both rings. Regions of interest for each material in the test object scans were automatically extracted. The Hounsfield unit values of each material using weighted filtered back projection (WFBP) at 1.5 mGy were used as the reference values to evaluate shifts in CT attenuation at lower dose levels using either WFBP or ADMIRE. Statistical analysis included basic statistics, Welch t tests, and a multivariable covariate model using the F test to assess the significance of the explanatory (independent) variables on the response (dependent) variable, mean CT attenuation, with reconstruction method included in the model. Multivariable regression analysis of the mean CT attenuation values showed a significant difference with decreasing dose between ADMIRE and WFBP. ADMIRE showed reduced noise and more stable CT attenuation compared with WFBP. 
There was a strong effect on the mean CT attenuation values of the scanned materials for ring size (P < 0.0001) and dose level (P < 0.0001). The number of voxels in the region of interest for the particular material studied did not demonstrate a significant effect (P > 0.05). The SD was lower with ADMIRE compared with WFBP at all dose levels and ring sizes (P < 0.05). The third-generation dual-source CT scanners using third-generation iterative reconstruction methods can acquire accurate quantitative CT images with acceptable image noise at very low-dose levels (0.15 mGy). This opens up new diagnostic and research opportunities in CT phenotyping of the lung for developing new treatments and increased understanding of pulmonary disease.

  3. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.

    PubMed

    Zelyak, O; Fallone, B G; St-Aubin, J

    2017-12-14

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
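The core stability finding above can be mimicked with a 2 × 2 toy splitting, where a skew-symmetric matrix B stands in for the magnetic-field operator (illustrative numbers only, not the discretized LBTE):

```python
import numpy as np

# Toy analogue of the splitting question: solve (T + B - S) x = b, where T is a
# "streaming-collision" part, S a scattering source, and B a skew-symmetric term
# standing in for the magnetic-field operator.
T = 2.0 * np.eye(2)
S = 1.0 * np.eye(2)
B = 3.0 * np.array([[0.0, -1.0], [1.0, 0.0]])

def spectral_radius(M):
    # Iteration converges iff the spectral radius of its error-propagation matrix < 1.
    return max(abs(np.linalg.eigvals(M)))

# Scheme 1: magnetic term kept in the iteration source -> M1 = T^{-1} (S - B)
rho1 = spectral_radius(np.linalg.solve(T, S - B))
# Scheme 2: magnetic term inside the streaming-collision operator -> M2 = (T + B)^{-1} S
rho2 = spectral_radius(np.linalg.solve(T + B, S))
```

With these invented values, scheme 1 has spectral radius above one (divergent source iteration) while scheme 2 stays below one, echoing the abstract's conclusion that the placement of the magnetic-field term decides stability.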

  4. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-01-01

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  5. Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".

    PubMed

    Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel

    2018-03-12

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.

  6. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR achieved excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  7. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data, and it can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input, multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review.

    PubMed

    Hatala, Rose; Sawatsky, Adam P; Dudek, Nancy; Ginsburg, Shiphra; Cook, David A

    2017-06-01

    In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.

  9. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). The purpose was to investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used to establish the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.

  10. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011 to the present, when the buildings and relevant infrastructure have been completed, SPIDER is entering the integrated commissioning phase, and the procurement of several MITICA components is at a well-advanced stage.

  11. Iterative color-multiplexed, electro-optical processor.

    PubMed

    Psaltis, D; Casasent, D; Carlotto, M

    1979-11-01

    A noncoherent optical vector-matrix multiplier using a linear LED source array and a linear P-I-N photodiode detector array has been combined with a 1-D adder in a feedback loop. The resultant iterative optical processor and its use in solving simultaneous linear equations are described. Operation on complex data is provided by a novel color-multiplexing system.
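The feedback loop described above realizes a Richardson-type iteration x_{k+1} = b + (I - A)x_k, which converges to the solution of Ax = b whenever the spectral radius of I - A is below one. A numeric analogue with an invented, well-conditioned matrix:

```python
import numpy as np

# Each pass of the loop is one optical vector-matrix multiply plus an electronic
# feedback addition; the iteration solves A x = b without ever inverting A.
A = np.array([[1.0, 0.3, -0.2],
              [0.1, 1.0, 0.2],
              [-0.2, 0.1, 1.0]])
b = np.array([1.0, 2.0, 0.5])

x = np.zeros(3)
for _ in range(100):
    x = b + (np.eye(3) - A) @ x        # one multiply-and-add per loop traversal

x_ref = np.linalg.solve(A, b)          # direct solution for comparison
```

Here the Gershgorin bound gives a spectral radius of at most 0.5 for I - A, so the loop contracts the error by at least half per pass, mirroring how the optical processor converges over repeated circulations.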

  12. Rater variables associated with ITER ratings.

    PubMed

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-10-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.

  13. Effects of ray profile modeling on resolution recovery in clinical CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, Christian; Knaup, Michael; Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz-heidelberg.de

    2014-02-15

Purpose: Iterative image reconstruction is gaining more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. However, among vendors and researchers, there is no consensus on how to best achieve these goals. The authors are focusing on the aspect of geometric ray profile modeling, which is realized by some algorithms, while others model the ray as a straight line. The authors incorporate ray-modeling (RM) in nonregularized iterative reconstruction. That means that instead of using a single needle beam to represent the x-ray, the authors evaluate the double integral of attenuation path length over the finite source distribution and the finite detector element size in the numerical forward projection. Our investigations aim at analyzing the resolution recovery (RR) effects of RM. Resolution recovery means that frequencies can be recovered beyond the resolution limit of the imaging system. In order to evaluate whether clinical CT images can benefit from modeling the geometrical properties of each x-ray, the authors performed a 2D simulation study of a clinical CT fan-beam geometry that includes the precise modeling of these geometrical properties. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and a Forbild thorax phantom with circular resolution patterns representing calcifications in the heart region are simulated. An FBP reconstruction with a Ram–Lak kernel is used as a reference reconstruction. The FBP is compared to iterative reconstruction techniques with and without RM: An ordered subsets convex (OSC) algorithm without any RM (OSC), an OSC where the forward projection is modeled concerning the finite focal spot and detector size (OSC-RM) and an OSC with RM and with a matched forward and backprojection pair (OSC-T-RM, T for transpose). 
In all cases, noise was matched to be able to focus on comparing spatial resolution. The authors use two different simulation settings. Both are based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm. Setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and the authors compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: The statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART) and another statistical iterative reconstruction algorithm, denoted with ordered subsets maximum likelihood (OSML) algorithm. All algorithms were implemented both without RM (denoted as OSC, OSSART, and OSML) and with RM (denoted as OSC-RM, OSSART-RM, and OSML-RM). Results: For the unrealistic case of a 5.0 mm focal spot the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, which is the first bar pattern that cannot be resolved without RM, can be easily resolved with RM. For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependency on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size increasing improvements can be observed with RM. Conclusions: Geometric RM in iterative reconstruction helps improving spatial resolution, if the ray cross-section is significantly larger than the ray sampling distance. 
In clinical CT, however, the ray is not much thicker than the distance between neighboring ray centers, as the focal spot size is small and detector crosstalk is negligible, due to reflective coatings between detector elements. Therefore, RM appears not to be necessary in clinical CT to achieve resolution recovery.

  14. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

In a simultaneous-source survey, no limitation is imposed on the shot scheduling of nearby sources, so a huge gain in acquisition efficiency can be obtained, but at the cost of recorded seismic data that are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., containing multiple reflections. The multi-dip seislet frame strategy solves the conflicting dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; then a fairly accurate slope estimation can be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis over both numerical synthetic and field data examples.
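The iterative-thresholding separation can be sketched with two morphologically distinct components, using the Fourier and spike bases as simple stand-ins for the multi-dip seislet frames; the signal, interference, and decreasing threshold schedule below are all invented:

```python
import numpy as np

def soft(x, lam):
    # Complex-safe soft thresholding of transform coefficients.
    mag = np.abs(x)
    return np.where(mag > lam, x * (1 - lam / np.maximum(mag, 1e-12)), 0.0)

n = 256
t = np.arange(n)
s1 = np.cos(2 * np.pi * 8 * t / n)           # "coherent" component, sparse in Fourier
s2 = np.zeros(n)
s2[[40, 170]] = [2.0, -1.5]                  # "blending interference", sparse in time
d = s1 + s2                                  # blended record

x1 = np.zeros(n, dtype=complex)              # Fourier coefficients of component 1
x2 = np.zeros(n)                             # time samples of component 2
for lam in np.linspace(1.0, 0.01, 100):      # decreasing threshold schedule
    r = d - np.fft.ifft(x1).real * np.sqrt(n) - x2   # current separation residual
    x1 = soft(x1 + np.fft.fft(r) / np.sqrt(n), lam)  # update in Fourier domain
    x2 = soft(x2 + r, lam).real                      # update in spike (time) domain
s1_hat = np.fft.ifft(x1).real * np.sqrt(n)   # deblended coherent component
```

Each pass thresholds the residual in the domain where one component is sparse, so the coherent signal and the spiky interference are pulled apart as the threshold decreases, which is the same mechanism the seislet-frame scheme exploits.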

  15. Application of an iterative least-squares waveform inversion of strong-motion and teleseismic records to the 1978 Tabas, Iran, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Mendoza, C.

    1991-01-01

An iterative least-squares technique is used to simultaneously invert the strong-motion records and teleseismic P waveforms for the 1978 Tabas, Iran, earthquake to deduce the rupture history. The effects of using different data sets and different parametrizations of the problem (linear versus nonlinear) are considered. A consensus of all the inversion runs indicates a complex, multiple source for the Tabas earthquake, with four main source regions over a fault length of 90 km and an average rupture velocity of 2.5 km/sec.

  16. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
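
    The source-encoding idea can be miniaturized with small matrices standing in for the per-source wave simulations; a hedged sketch of the WISE-style stochastic iteration (the operators, sizes, and step size below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy per-source forward operators (stand-ins for wave-equation simulations)
A = [np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0]]),
     np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 0.0]]),
     np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])]

def wise_sgd(A, d, n_iter=1000, step=0.05):
    """Each iteration draws a fresh random +/-1 encoding vector, forms one
    encoded 'supershot', and takes a stochastic gradient step - so the cost
    per iteration is that of a single simulation, not one per source."""
    m = np.zeros(A[0].shape[1])
    for _ in range(n_iter):
        w = rng.choice([-1.0, 1.0], size=len(A))
        A_enc = sum(wi * Ai for wi, Ai in zip(w, A))
        d_enc = sum(wi * di for wi, di in zip(w, d))
        m -= step * A_enc.T @ (A_enc @ m - d_enc)
    return m
```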

  17. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that fusion rules cannot be self-adaptively adjusted by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
    •The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    •This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    •This text puts forward the model operator and the observation operator as the fusion scheme of RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
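
    The weighted-sum objective and the genetic search can be caricatured in a few lines; everything here (the two quality indices, the weights, the genetic operators) is an illustrative assumption, not the authors' GSDA:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(a):
    # hypothetical weighted sum of two evaluation indices of a fusion
    # parameter a in [0, 1]; each index prefers a different setting
    clarity = -(a - 0.9)**2
    spectral = -(a - 0.4)**2
    return 0.6 * clarity + 0.4 * spectral   # optimum at 0.6*0.9 + 0.4*0.4 = 0.7

def genetic_search(n_gen=60, pop_size=40):
    pop = rng.uniform(0.0, 1.0, pop_size)
    for _ in range(n_gen):
        fit = fitness(pop)
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where(fit[i] > fit[j], pop[i], pop[j])    # tournament selection
        children = 0.5 * (parents + rng.permutation(parents))  # blend crossover
        children += rng.normal(0.0, 0.02, pop_size)            # Gaussian mutation
        children = np.clip(children, 0.0, 1.0)
        children[0] = pop[np.argmax(fit)]                      # elitism
        pop = children
    return pop[np.argmax(fitness(pop))]
```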

  18. Development progresses of radio frequency ion source for neutral beam injector in fusion devices.

    PubMed

    Chang, D H; Jeong, S H; Kim, T S; Park, M; Lee, K W; In, S R

    2014-02-01

    A large-area RF (radio frequency)-driven ion source is being developed in Germany for the heating and current drive of the ITER device. Negative hydrogen ion sources are major components of the neutral beam injection systems in future large-scale fusion experiments such as ITER and DEMO. RF ion sources for the production of positive hydrogen (deuterium) ions have been successfully developed for the neutral beam heating systems at IPP (Max-Planck-Institute for Plasma Physics) in Germany. The first long-pulse ion source, with a magnetic bucket plasma generator including a filament heating structure, was developed successfully for the first NBI system of the KSTAR tokamak. There is a development plan at KAERI for an RF ion source to extract positive ions, applicable to the KSTAR NBI system, and to extract negative ions for future fusion devices such as the Fusion Neutron Source and Korea-DEMO. The characteristics of RF-driven plasmas and the uniformity of the plasma parameters in the test RF ion source were investigated initially using an electrostatic probe.

  19. A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction

    PubMed Central

    Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim

    2015-01-01

    Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It provides very high resolution, still unmatched by in-vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach builds on the observation that the unwarping problem amounts to a superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, on simulated histology data derived from a CT data set, and on real histology data. In the case of the simulated histology, where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918
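
    The anatomy-versus-deformation separation can be mimicked in one dimension: a Gauss-Seidel averaging sweep damps high-frequency slice-to-slice jitter quickly while barely touching a smooth low-frequency trend. A deliberately simplified sketch (scalar "slices" rather than the paper's full registration pipeline):

```python
import numpy as np

def gauss_seidel_unwarp(y, n_sweeps=10):
    """Gauss-Seidel sweeps replacing each interior 'slice' value by the
    average of its neighbours; high-frequency deformation is damped
    quickly while low-frequency anatomy is nearly preserved."""
    y = y.copy()
    for _ in range(n_sweeps):
        for k in range(1, len(y) - 1):
            y[k] = 0.5 * (y[k - 1] + y[k + 1])  # uses already-updated left neighbour
    return y
```

    Stopping after a few sweeps is what keeps the naturally curved trend intact: the smoothing iteration kills high-frequency error modes orders of magnitude faster than the low-frequency anatomy.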

  20. Can sinogram-affirmed iterative (SAFIRE) reconstruction improve imaging quality on low-dose lung CT screening compared with traditional filtered back projection (FBP) reconstruction?

    PubMed

    Yang, Wen Jie; Yan, Fu Hua; Liu, Bo; Pang, Li Fang; Hou, Liang; Zhang, Huan; Pan, Zi Lai; Chen, Ke Min

    2013-01-01

    To evaluate the performance of sinogram-affirmed iterative (SAFIRE) reconstruction on the image quality of low-dose lung computed tomographic (CT) screening compared with filtered back projection (FBP). Three hundred and four patients undergoing annual low-dose lung CT screening were examined on a dual-source CT system at 120 kilovolts (peak) with a reference tube current of 40 mA·s. Six image series were reconstructed: one data set with FBP and 5 data sets with SAFIRE at reconstruction strengths 1 to 5. Image noise was recorded, and subjective scores of image noise, image artifacts, and overall image quality were assessed by 2 radiologists. The mean ± SD weight for all patients was 66.3 ± 12.8 kg, and the body mass index was 23.4 ± 3.2. The mean ± SD dose-length product was 95.2 ± 30.6 mGy cm, and the mean ± SD effective dose was 1.6 ± 0.5 mSv. The observer agreements for image noise grade, artifact grade, and overall image quality were 0.785, 0.595 and 0.512, respectively. Among the 6 data sets, both the measured mean objective image noise and the subjective image noise of FBP were the highest, and image noise decreased with increasing SAFIRE reconstruction strength. The strength-3 (S3) data sets obtained the best image quality scores. Sinogram-affirmed iterative reconstruction can significantly improve the image quality of low-dose lung CT screening compared with FBP, and SAFIRE with reconstruction strength 3 is a pertinent choice for low-dose lung CT.

  1. Iterative deblending of simultaneous-source data using a coherency-pass shaping operator

    NASA Astrophysics Data System (ADS)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang

    2017-10-01

    Simultaneous-source acquisition offers great economic savings, but it brings the unprecedented challenge of removing the crosstalk interference from the recorded seismic data. In this paper, we propose a novel iterative method to separate simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, within the shaping regularization framework. In a simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. Its shortcoming, however, is that it may leave residual noise when the blending interference is strong. We propose to cascade the rank-reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to aggressively suppress the blending interference and a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraints by increasing the rank and shrinking the threshold, to recover weak events and to guarantee convergence. In this way, the combined rank-reduction and thresholding strategy acts as a coherency-pass filter, which passes only the coherent high-amplitude component after rank reduction, instead of passing both signal and noise as in traditional rank-reduction-based approaches. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common-receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
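
    The cascaded rank-reduction-plus-thresholding operator can be sketched on a single noisy trace: a Hankel SVD truncation followed by time-domain soft thresholding. This is a bare-bones stand-in (time-domain, one trace, fixed rank and threshold) for the paper's frequency-domain iterative scheme:

```python
import numpy as np

def rank_reduce(x, rank):
    """Truncated SVD of the Hankel matrix of x, followed by anti-diagonal
    averaging back to a signal (one Cadzow/SSA-style rank-reduction step)."""
    n = x.size
    L = n // 2
    H = np.array([x[i:i + n - L + 1] for i in range(L)])   # L x (n-L+1) Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(L):
        for j in range(n - L + 1):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1.0
    return out / cnt

def coherency_pass(x, rank, lam):
    # rank reduction cascaded with time-domain soft thresholding
    y = rank_reduce(x, rank)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

    In the iterative scheme described above, `rank` would start small and grow while `lam` shrinks across iterations.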

  2. Development of two color laser diagnostics for the ITER poloidal polarimeter.

    PubMed

    Kawahata, K; Akiyama, T; Tanaka, K; Nakayama, K; Okajima, S

    2010-10-01

    Two color laser diagnostics using terahertz laser sources are under development for high-performance operation of the Large Helical Device and for future fusion devices such as ITER. So far, we have achieved high-power laser oscillation lines simultaneously oscillating at 57.2 and 47.7 μm by using a twin optically pumped CH(3)OD laser, and confirmed the original function of the two color laser interferometer, the compensation of mechanical vibration. In this article, the application of the two color laser diagnostics to the ITER poloidal polarimeter and recent hardware developments are described.
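
    The vibration-compensation principle of a two-colour interferometer reduces to a 2×2 linear solve. The relation below is the standard textbook single-pass form (the wavelengths are the two lines quoted in the record; the numeric values in the usage are invented for illustration):

```python
import numpy as np

RE = 2.8179403262e-15  # classical electron radius [m]

def two_color_separation(phi1, phi2, lam1, lam2):
    """Solve the two measured phases for the path-length change d (vibration)
    and the line-integrated density N = integral of n_e dl, assuming the
    usual single-pass approximations:
        phi_i = (2*pi / lam_i) * d + r_e * lam_i * N
    The vibration term scales as 1/lambda while the plasma term scales as
    lambda, which is what lets two colours separate the two contributions."""
    A = np.array([[2 * np.pi / lam1, RE * lam1],
                  [2 * np.pi / lam2, RE * lam2]])
    d, N = np.linalg.solve(A, np.array([phi1, phi2]))
    return d, N
```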

  3. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively by employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
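
    The adjoint idea, a gradient obtained from one forward and one adjoint "simulation" per iteration with no stored Green's functions, can be miniaturized with a convolution standing in for the wave-propagation operator. An illustrative sketch only (the wavelet, sizes, and step size are assumptions, not the authors' solver):

```python
import numpy as np

w = np.array([1.0, 0.5])   # toy "wavelet": stands in for the wave-equation operator

def forward(m):
    # stand-in forward modelling F m (one "wavefield simulation")
    return np.convolve(m, w)

def adjoint(r):
    # adjoint operator F^T r (one "adjoint simulation"); nothing pre-computed
    return np.correlate(r, w, mode='valid')

def invert(d, n, n_iter=500, step=0.4):
    """Gradient descent on 0.5 * ||F m - d||^2 using only operator calls,
    never an explicit matrix of Green's functions."""
    m = np.zeros(n)
    for _ in range(n_iter):
        m -= step * adjoint(forward(m) - d)
    return m
```

    The cost per iteration is two operator applications regardless of the number of receivers, which is the source of the efficiency advantage claimed for dense networks.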

  4. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively by employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak-slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.

  5. EDITORIAL: ECRH physics and technology in ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.

    2008-05-01

    It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: (1) EC wave physics: current understanding and extrapolation to ITER; (2) application of EC waves to confinement and stability studies, including active control techniques for ITER; (3) transmission systems/launchers: state of the art and ITER-relevant techniques; (4) gyrotron development towards ITER needs; and (5) system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O. Sauter: EC wave physics and applications, M. Thumm: source and transmission line development, and S. Cirant: ITER-specific system designs). These summaries are included in this issue to give a more complete view of the technical meeting. 
    Finally, it is appropriate to mention the future of this meeting series. With the ratification of the ITER agreement and the formation of the ITER International Organization, it was recognized that meetings conducted by outside agencies with an exclusive focus on ITER would be somewhat unusual. However, the participants at this meeting felt that the gathering of international experts with diverse specialities within EC wave physics and technology to focus on using EC waves in future fusion devices like ITER was extremely valuable. It was therefore recommended that this series of meetings continue, but with the broader focus on the application of EC waves to steady-state and burning plasma experiments including demonstration power plants. As the papers in this special issue show, the EC community is already taking seriously the challenges of applying EC waves to fusion devices with high neutron fluence and continuous operation at high reliability.

  6. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J. Brabec, E. Aprà, H. J. J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing units (GPUs), is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  7. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-03-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources, as well as the amount of co-extracted electrons. To move closer to the experimental conditions, the code has been improved: it now includes the bias potential applied to the first grid (plasma grid) of the extraction system and the presence of Cs+ ions in the plasma. The simulation results show that these aspects play an important role in the formation of an ion-ion plasma in the boundary region, by reducing the depth of the negative potential well in the vicinity of the plasma grid that limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NI and of its emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but, as in the 10 kV case, the deepest point of the meniscus bend is still outside the aperture. Finally, a plasma containing equal amounts of NIs and electrons (nH- = ne = 10^17 m-3), representing good source conditioning, was simulated. It is shown that under such conditions the extracted NI current can reach values of ˜32 mA cm-2 at the ITER-relevant extraction potential of 10 kV, and ˜19 mA cm-2 at 5 kV. These results are in good agreement with experimental measurements performed at the small-scale ITER prototype source at the BATMAN test facility.

  8. Indian Test Facility (INTF) and its updates

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, M.; Chakraborty, A.; Rotti, C.; Joshi, J.; Patel, H.; Yadav, A.; Shah, S.; Tyagi, H.; Parmar, D.; Sudhir, Dass; Gahlaut, A.; Bansal, G.; Soni, J.; Pandya, K.; Pandey, R.; Yadav, R.; Nagaraju, M. V.; Mahesh, V.; Pillai, S.; Sharma, D.; Singh, D.; Bhuyan, M.; Mistry, H.; Parmar, K.; Patel, M.; Patel, K.; Prajapati, B.; Shishangiya, H.; Vishnudev, M.; Bhagora, J.

    2017-04-01

    To characterize the ITER Diagnostic Neutral Beam (DNB) system to its full specification, and to support IPR's development programme for a negative-ion-based neutral beam injector (NBI) system, an R&D facility named INTF is in its commissioning phase. Implementing a successful DNB at ITER requires several challenges to be overcome, related to negative ion production, its neutralization, and the transport of the corresponding neutral beam over a path length of ∼20.67 m to reach the ITER plasma. The DNB is a procurement package for India, as an in-kind contribution to ITER. Since ITER is considered a nuclear facility, only the minimum set of diagnostic systems linked with safe operation of the machine is planned to be incorporated in it, which makes it difficult to characterize the DNB after onsite commissioning. The delivery of the DNB to ITER will therefore benefit if the DNB is operated and characterized prior to onsite commissioning. INTF is envisaged to become operational with the large-size ion source activities on a timeline similar to that of the SPIDER (RFX, Padova) facility. This paper describes some of the development updates of the facility.

  9. Operation of large RF sources for H-: Lessons learned at ELISE

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Wünderlich, D.; Heinemann, B.; Kraus, W.; Riedl, R.

    2017-08-01

    The goal of the ELISE test facility is to demonstrate that large RF-driven negative ion sources (1 × 1 m2 source area with 360 kW installed RF power) can achieve the parameters required for the ITER beam sources in terms of current densities and beam homogeneity at a filling pressure of 0.3 Pa for pulse lengths of up to one hour. From the experience gained in operating the test facility, from beam source inspection and maintenance, and from the source performance achieved so far, conclusions are drawn for the commissioning and operation of the ITER beam sources. Addressed are critical technical RF issues, extrapolations to the required RF power, Cs consumption and Cs ovens, the need to adjust the magnetic filter field strength, and the temporal dynamics and spatial asymmetry of the co-extracted electron current. It is proposed to relax the low-pressure limit to 0.4 Pa and to replace the fixed electron-to-ion ratio by a power density limit for the extraction grid, which would be highly beneficial for controlling the co-extracted electrons.

  10. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression for the flux linkage of Rogowski sensors is quantitatively validated. Then, to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and the statistical measurement accuracies are hence provided. The following sources of error are considered: the effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance needed for the operation of the ITER machine.
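
    The classical expression referred to above ties the flux linkage of an ideal Rogowski coil only to the enclosed current, Λ = μ0 n A I. A minimal numerical sketch (ideal coil, naive rectangle-rule integration; the real CER relies on drift-compensated integrators, and the coil parameters below are invented):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def rogowski_flux_linkage(I_enclosed, turns_per_metre, turn_area):
    """Classical Rogowski relation: flux linkage depends only on the
    enclosed current, independent of the winding path geometry."""
    return MU0 * turns_per_metre * turn_area * I_enclosed

def reconstruct_current(v, dt, turns_per_metre, turn_area):
    """Integrate the coil voltage v = -dLambda/dt to recover the enclosed
    current; drift-free integration is the hard part in practice."""
    M = MU0 * turns_per_metre * turn_area
    lam = -np.cumsum(v) * dt        # simple rectangle-rule integration
    return lam / M
```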

  11. Viscous and Interacting Flow Field Effects.

    DTIC Science & Technology

    1980-06-01

    in the inviscid flow analysis using free vortex sheets whose shapes are determined by iteration. The outer iteration employs boundary layer...Methods, Inc. which replaces the source distribution in the separation zone by a vortex wake model . This model is described in some detail in (2), but...in the potential flow is obtained using linearly varying vortex singularities distributed on planar panels. The wake is represented by sheets of

  12. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which can be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. 
    To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
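
    The distorted-Born iteration itself can be miniaturized as a discrete Lippmann-Schwinger toy, u = u0 + G0 diag(q) u, in which the fields and the distorted Green's function are re-solved in the current background before each linearized (Gauss-Newton) step. A scalar sketch only, not the authors' anisotropic T-matrix machinery:

```python
import numpy as np

def dbi_invert(d, u0, G0, n_iter=15):
    """Distorted-Born-iterative sketch: each pass updates the background
    fields u and the distorted Green's function Gq, then takes one
    linearized least-squares step for the contrast q."""
    n = G0.shape[0]
    q = np.zeros(n)
    for _ in range(n_iter):
        A = np.eye(n) - G0 * q          # I - G0 diag(q) (q scales the columns)
        u = np.linalg.solve(A, u0)      # fields in the current background
        Gq = np.linalg.solve(A, G0)     # distorted Green's function, updated each pass
        r = (d - u).reshape(-1)         # residual over all (receiver, source) pairs
        # linearized sensitivity: d(data)/dq_j = Gq[i, j] * u[j, s]
        J = np.einsum('ij,js->isj', Gq, u).reshape(-1, n)
        dq, *_ = np.linalg.lstsq(J, r, rcond=None)
        q = q + dq
    return q
```

    In the paper's variant, the explicit re-solve for `Gq` is replaced by T-matrix updates, which is where the claimed per-iteration saving comes from.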

  13. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated through Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
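
    The Child's-law constraint mentioned above caps the emitted current density at the space-charge limit; the standard planar-diode form is easy to state (the numeric check below uses textbook constants, not NEDLab's implementation):

```python
import numpy as np

EPS0 = 8.8541878128e-12     # vacuum permittivity [F/m]
E_CHARGE = 1.602176634e-19  # elementary charge [C]
M_E = 9.1093837015e-31      # electron mass [kg]

def child_langmuir_j(V, d, mass=M_E, charge=E_CHARGE):
    """Space-charge-limited current density for a planar gap (Child's law):
        J = (4 * eps0 / 9) * sqrt(2 q / m) * V^(3/2) / d^2
    with gap voltage V [V] and gap length d [m]."""
    return (4.0 * EPS0 / 9.0) * np.sqrt(2.0 * charge / mass) * V**1.5 / d**2
```

    For electrons this reduces to the familiar J ≈ 2.33e-6 · V^{3/2}/d² A/m²; for Cs+ the ion mass replaces the electron mass.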

  14. External heating and current drive source requirements towards steady-state operation in ITER

    NASA Astrophysics Data System (ADS)

    Poli, F. M.; Kessel, C. E.; Bonoli, P. T.; Batchelor, D. B.; Harvey, R. W.; Snyder, P. B.

    2014-07-01

Steady state scenarios envisaged for ITER aim at optimizing the bootstrap current, while maintaining sufficient confinement and stability to provide the necessary fusion yield. Non-inductive scenarios will need to operate with internal transport barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. However, the large pressure gradients associated with ITBs in regions of weak or negative magnetic shear can be conducive to ideal MHD instabilities, reducing the no-wall limit. The E × B flow shear from toroidal plasma rotation is expected to be low in ITER, with a major role in the ITB dynamics being played by the magnetic geometry. Combinations of heating and current drive (H/CD) sources that sustain reversed magnetic shear profiles throughout the discharge are the focus of this work. Time-dependent transport simulations indicate that a combination of electron cyclotron (EC) and lower hybrid (LH) waves is a promising route towards steady state operation in ITER. The LH forms and sustains expanded barriers, and the EC deposition at mid-radius freezes the bootstrap current profile, stabilizing the barrier and leading to confinement levels 50% higher than typical H-mode energy confinement times. Using LH spectra centred on a parallel refractive index of 1.75-1.85, the performance of these plasma scenarios is close to the ITER target of 9 MA non-inductive current, global confinement gain H98 = 1.6 and fusion gain Q = 5.

  15. Calibration of ITER Instant Power Neutron Monitors: Recommended Scenario of Experiments at the Reactor

    NASA Astrophysics Data System (ADS)

    Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.

    2017-12-01

Instant power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for achieving the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and to calibrate 238U chambers against responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements.
It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the staff monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the staff detectors are located. Owing to the low background, the detectors of the neutron chambers do not need calibration in the reactor, because such a calibration actually amounts to determining the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
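The idea of replacing many "expensive" irradiation positions with a few quadrature nodes can be illustrated with a three-point Gauss-Legendre rule. The response function below is an illustrative smooth stand-in, not an actual detector response.

```python
# Gauss-Legendre quadrature sketch: a few "expensive" point-source positions
# (the nodes) reproduce the integral of a smooth detector response over the
# source profile. The response function is an illustrative stand-in only.
import math

# 3-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def gauss3(f, a, b):
    """Integrate f over [a, b] with only three function evaluations."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(NODES, WEIGHTS))

response = lambda z: math.exp(-0.5 * z * z)     # smooth stand-in response
approx = gauss3(response, -1.0, 1.0)
```

Three evaluations already match the exact integral to better than 0.1%, which is the sense in which a Gauss scheme minimizes the number of irradiations for a given accuracy.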

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, E.; Hamilton, D.

The purpose of this report is to chronicle the development of the ROST (trademark), its capabilities, associated equipment, and accessories. The report concludes with an evaluation of how closely the results obtained using the technology compare to the results obtained using the reference methods.

  17. Unsteady flow model for circulation-control airfoils

    NASA Technical Reports Server (NTRS)

    Rao, B. M.

    1979-01-01

An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation-control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using the CC airfoil airloads program or the Theodorsen hard flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and the iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program appears to work well within the basic assumption of incompressible flow.

  18. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  19. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
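The decompose-then-recompose step can be sketched for a linear plant. Everything below is hypothetical (a gain-2 plant and two orthogonal primitives, not the paper's experimental setup): a new desired trajectory is projected onto the stored output primitives, and the same coefficients recombine the stored reference inputs.

```python
# Primitive recomposition sketch (hypothetical signals and plant):
# each stored pair maps a learned reference input to its measured output.
# Linear time-invariant plant stand-in: output = 2 * input (elementwise).
inputs  = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]        # reference input primitives
outputs = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]]        # their measured outputs

desired = [3.0, 4.0, 0.0]                            # new trajectory to track

# Coefficients from projecting 'desired' onto the (orthogonal) output basis
coeffs = [sum(d * o for d, o in zip(desired, out)) /
          sum(o * o for o in out) for out in outputs]

# Recomposed reference input: no re-learning of the new trajectory is needed
ref_input = [sum(c * u[k] for c, u in zip(coeffs, inputs))
             for k in range(len(desired))]
```

Feeding ref_input through the gain-2 plant reproduces the desired trajectory exactly, which is the linearity argument behind reusing learned primitives.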

  20. Mission of ITER and Challenges for the Young

    NASA Astrophysics Data System (ADS)

    Ikeda, Kaname

    2009-02-01

    It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project—the ITER Parties—are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.

  1. Mission of ITER and Challenges for the Young

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, Kaname

    2009-02-19

It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project--the ITER Parties--are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER, will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.

  2. CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles

    NASA Astrophysics Data System (ADS)

    Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.

    2014-04-01

ITER first wall (FW) panels are layered structures made of three materials: 316L(N) austenitic stainless steel, CuCrZr alloy and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal-heat-flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, which fulfil the ITER mechanical specifications, but often results in a coarse grain size for the CuCrZr alloy, which is unfavourable, especially for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU in the frame of an ITER task agreement. Two material fabrication approaches have been investigated. The first was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper-alloy manufacturer. The second was the manufacturing of CuCrZr alloy using the powder metallurgy (PM) route and HIP consolidation. This paper presents the main mechanical and microstructural results associated with the two CuCrZr approaches mentioned above. The mechanical properties of solid CuCrZr, PM CuCrZr and joints (solid CuCrZr/solid CuCrZr, solid CuCrZr/316L(N) and PM CuCrZr/316L(N)) are also presented.

  3. A response to Yu et al. "A forward-backward fragment assembling algorithm for the identification of genomic amplification and deletion breakpoints using high-density single nucleotide polymorphism (SNP) array", BMC Bioinformatics 2007, 8: 145.

    PubMed

    Rueda, Oscar M; Diaz-Uriarte, Ramon

    2007-10-16

Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).

  4. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure in which each iteration consists of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that the spatially variable threshold motivation is embedded, for the first time, in a modified outlier detector with a locally estimated noise variance in an iterative framework. It turns out that a spatially variable threshold is preferable to a single constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
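A minimal version of a spatially variable threshold can be sketched as follows. The detector below (a neighbourhood median with a local median-absolute-deviation noise scale, applied to a 1-D toy field) is purely illustrative and not the authors' exact algorithm.

```python
# Sketch of a spatially variable outlier threshold: each vector is compared
# with the median of its neighbours, and the residual is normalised by a
# locally estimated noise scale instead of a single global threshold.
def detect_outliers(field, k=2, factor=3.0, eps=0.1):
    n = len(field)
    flags = []
    for i in range(n):
        nb = [field[j] for j in range(max(0, i - k), min(n, i + k + 1)) if j != i]
        nb.sort()
        med = nb[len(nb) // 2]                      # local reference value
        dev = sorted(abs(v - med) for v in nb)
        mad = dev[len(dev) // 2]                    # local noise-scale estimate
        flags.append(abs(field[i] - med) > factor * (mad + eps))
    return flags

field = [1.0, 1.1, 0.9, 9.0, 1.0, 1.05, 0.95]       # one spurious vector
flags = detect_outliers(field)
```

Because the threshold scales with the local deviation, smooth regions get a tight test while noisy regions get a looser one, which is the point of a spatially variable threshold.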

  5. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
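The reweighting scheme can be sketched on a toy problem with an identity forward model, where each weighted convex subproblem reduces to soft-thresholding. The data and penalty weight below are hypothetical; the weights follow the standard convex-surrogate rule for an l0.5-type penalty.

```python
# Iterative reweighting sketch (toy, identity forward model): each outer
# iteration solves a weighted l1 problem by soft-thresholding, with weights
# w_i = 1 / (2 * sqrt(|x_i| + eps)) taken from the previous estimate,
# a standard surrogate for the non-convex l0.5 penalty.
def soft(v, t):
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def irls_sparse(y, lam=0.4, eps=1e-6, n_outer=10):
    x = list(y)                                   # initial estimate
    for _ in range(n_outer):
        w = [1.0 / (2.0 * (abs(xi) + eps) ** 0.5) for xi in x]
        x = [soft(yi, lam * wi) for yi, wi in zip(y, w)]
    return x

y = [3.0, 0.2, -2.5, 0.1]      # two strong sources plus small noise entries
x_hat = irls_sparse(y)
```

Large coefficients get small weights (little shrinkage, reduced amplitude bias) while small coefficients get large weights and are driven exactly to zero, which is the qualitative behaviour claimed for irMxNE over a plain l1 estimate.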

  6. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
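The equivalence to a preconditioned Richardson iteration can be sketched on a toy 2x2 system. Here a simple Jacobi preconditioner stands in, purely for illustration, for the better low-order operator that a synthetic method provides.

```python
# Richardson-iteration sketch: a stationary scheme x_{k+1} = x_k + P_inv (b - A x_k).
# A better preconditioner P (the role played by the synthetic low-order
# operator) shrinks the error faster per iteration. Toy 2x2 system only.
def richardson(A, b, P_inv, n_iter=100):
    x = [0.0, 0.0]
    for _ in range(n_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = [x[i] + sum(P_inv[i][j] * r[j] for j in range(2)) for i in range(2)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
identity_scaled = [[0.2, 0.0], [0.0, 0.2]]            # poor preconditioner
jacobi = [[0.25, 0.0], [0.0, 1.0 / 3.0]]              # diag(A)^-1: better
x1 = richardson(A, b, identity_scaled)                # needs many iterations
x2 = richardson(A, b, jacobi, n_iter=40)              # converges much faster
```

Both runs converge to the same solution of A x = b; the difference in iteration count is exactly the "further preconditioning" effect the abstract attributes to TSA.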

  7. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2015-01-01

An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is defined as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
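The two added regressors can be sketched with an ordinary least-squares fit. The coefficients, loads, and temperatures below are hypothetical, and the normal equations are solved with a small Gaussian-elimination helper; this is only a structural illustration of the regression model, not the balance calibration software.

```python
# Regression sketch: gage output modelled as
#   out = a0 + a1 * load + a2 * dT + a3 * dT**2,
# where dT = T - T_REF is the difference from the reference (primary
# calibration) temperature. Hypothetical data from known coefficients.
def solve(M, v):
    n = len(v)
    for c in range(n):                       # Gaussian elimination with pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            v[r] -= f * v[c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):           # back substitution
        x[r] = (v[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

T_REF = 20.0
true_coef = [0.5, 2.0, 0.03, -0.001]         # a0, a1, a2, a3 (hypothetical)
data = [(load, T) for load in (0.0, 50.0, 100.0) for T in (10.0, 20.0, 40.0)]
rows = [[1.0, load, T - T_REF, (T - T_REF) ** 2] for load, T in data]
outs = [sum(c * r for c, r in zip(true_coef, row)) for row in rows]

# Normal equations: (X^T X) a = X^T y
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
Xty = [sum(r[i] * o for r, o in zip(rows, outs)) for i in range(4)]
coef = solve(XtX, Xty)
```

With calibration points spanning at least three temperatures, the linear and quadratic temperature terms are identifiable and the fit recovers the generating coefficients.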

  8. Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities

    DTIC Science & Technology

    2009-02-01

    levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making

  9. Structure-based coarse-graining for inhomogeneous liquid polymer systems.

    PubMed

    Fukuda, Motoo; Zhang, Hedong; Ishiguro, Takahiro; Fukuzawa, Kenji; Itoh, Shintaro

    2013-08-07

    The iterative Boltzmann inversion (IBI) method is used to derive interaction potentials for coarse-grained (CG) systems by matching structural properties of a reference atomistic system. However, because it depends on such thermodynamic conditions as density and pressure of the reference system, the derived CG nonbonded potential is probably not applicable to inhomogeneous systems containing different density regimes. In this paper, we propose a structure-based coarse-graining scheme to devise CG nonbonded potentials that are applicable to different density bulk systems and inhomogeneous systems with interfaces. Similar to the IBI, the radial distribution function (RDF) of a reference atomistic bulk system is used for iteratively refining the CG nonbonded potential. In contrast to the IBI, however, our scheme employs an appropriately estimated initial guess and a small amount of refinement to suppress transfer of the many-body interaction effects included in the reference RDF into the CG nonbonded potential. To demonstrate the application of our approach to inhomogeneous systems, we perform coarse-graining for a liquid perfluoropolyether (PFPE) film coated on a carbon surface. The constructed CG PFPE model favorably reproduces structural and density distribution functions, not only for bulk systems, but also at the liquid-vacuum and liquid-solid interfaces, demonstrating that our CG scheme offers an easy and practical way to accurately determine nonbonded potentials for inhomogeneous systems.
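The IBI refinement rule, V_{n+1}(r) = V_n(r) + kT ln(g_n(r) / g_ref(r)), can be sketched in a toy where the "simulation" obeys the low-density limit g(r) = exp(-V(r)/kT), so the update converges immediately. In a real IBI workflow the RDF g would come from a coarse-grained molecular-dynamics run at each iteration; the grid and reference potential below are hypothetical.

```python
# Iterative Boltzmann inversion sketch:
#   V_{n+1}(r) = V_n(r) + kT * ln(g_n(r) / g_ref(r)).
# Toy "simulation" uses the low-density limit g(r) = exp(-V(r)/kT).
import math

KT = 1.0
r_grid = [0.9, 1.0, 1.2, 1.5]
v_ref = [1.2, -0.3, -0.1, 0.0]                      # hidden reference potential

def simulate_rdf(v):                                 # stand-in for a CG MD run
    return [math.exp(-vi / KT) for vi in v]

g_ref = simulate_rdf(v_ref)                          # target (reference) RDF

v = [0.0] * len(r_grid)                              # initial guess
for _ in range(5):                                   # IBI refinement loop
    g = simulate_rdf(v)
    v = [vi + KT * math.log(gi / gri) for vi, gi, gri in zip(v, g, g_ref)]
```

The abstract's point about limiting the amount of refinement corresponds to stopping this loop early (or damping the logarithmic correction) so that many-body effects in the reference RDF do not leak into the pair potential.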

  10. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    PubMed Central

    Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity. PMID:27110389

  11. An Iterative Method for Problems with Multiscale Conductivity

    PubMed Central

    Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je

    2012-01-01

A model whose conductivity varies strongly across a very thin layer will be considered. It is related to a stable phantom model, which is designed to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high contrast of conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that from the reference solution for a sufficiently small mesh size h.
The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238
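The right-hand-side update can be sketched as a splitting iteration on a toy 2x2 analogue: instead of inverting the full operator A + B, the easy uniform-conductivity problem A x = rhs is solved repeatedly while the thin-layer contribution B is moved to the right-hand side. The operators below are hypothetical stand-ins, not the paper's finite-element matrices.

```python
# Splitting-iteration sketch: solve (A + B) x = b via A x_{k+1} = b - B x_k,
# where A is the simple uniform-conductivity operator and B the thin-layer
# correction assembled from the previous iterate. Toy 2x2 example only.
A_diag = 2.0                                   # uniform-conductivity operator
B = [[0.3, 0.1], [0.1, 0.3]]                   # thin-layer contribution
b = [1.0, 1.0]

x = [0.0, 0.0]
for _ in range(100):                           # right-hand-side update loop
    Bx = [sum(B[i][j] * x[j] for j in range(2)) for i in range(2)]
    x = [(b[i] - Bx[i]) / A_diag for i in range(2)]
```

The loop converges whenever the correction B is small relative to A, mirroring the paper's convergence analysis in terms of the conductivity contrast and the relative layer thickness.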

  12. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
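The iterative ray/surface intersection at the heart of such a virtual system can be sketched with a simple bisection search along the projector ray. The height-field surface and the ray below are hypothetical.

```python
# Sketch of an iterative ray/surface intersection (hypothetical surface):
# the projector ray p(t) = o + t*d is intersected with a height field
# z = h(x) by bisection on f(t) = p_z(t) - h(p_x(t)).
def intersect(o, d, h, t_lo=0.0, t_hi=10.0, n_iter=60):
    f = lambda t: (o[1] + t * d[1]) - h(o[0] + t * d[0])
    for _ in range(n_iter):
        t_mid = 0.5 * (t_lo + t_hi)
        if f(t_lo) * f(t_mid) <= 0.0:   # sign change: root in lower half
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)

surface = lambda x: 0.1 * x * x                  # stand-in object surface
t_hit = intersect((0.0, 2.0), (1.0, -1.0), surface)  # ray from (x=0, z=2)
x_hit = 0.0 + t_hit * 1.0                        # hit point abscissa
```

For nonparallel illumination every ray needs its own such search, which is why the iterative formulation replaces the closed-form intersection used in parallel-illumination virtual systems.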

  13. A BMI-adjusted ultra-low-dose CT angiography protocol for the peripheral arteries-Image quality, diagnostic accuracy and radiation exposure.

    PubMed

    Schreiner, Markus M; Platzgummer, Hannes; Unterhumer, Sylvia; Weber, Michael; Mistelbauer, Gabriel; Loewe, Christian; Schernthaner, Ruediger E

    2017-08-01

To investigate radiation exposure, objective image quality, and the diagnostic accuracy of a BMI-adjusted ultra-low-dose CT angiography (CTA) protocol for the assessment of peripheral arterial disease (PAD), with digital subtraction angiography (DSA) as the standard of reference. In this prospective, IRB-approved study, 40 PAD patients (30 male, mean age 72 years) underwent CTA on a dual-source CT scanner at 80 kV tube voltage. The reference amplitude for tube current modulation was personalized based on the body mass index (BMI), with 120 mAs for patients with BMI ≤ 25 or 150 mAs for BMI > 25. The presence of significant stenosis (>70%) was assessed by two readers independently and compared to subsequent DSA. Radiation exposure was assessed with the computed tomography dose index (CTDIvol) and the dose-length product (DLP). Objective image quality was assessed via contrast- and signal-to-noise ratio (CNR and SNR) measurements. Radiation exposure and image quality were compared between the BMI groups and between the BMI-adjusted ultra-low-dose protocol and the low-dose institutional standard protocol (ISP). The BMI-adjusted ultra-low-dose protocol reached high diagnostic accuracy values of 94% for Reader 1 and 93% for Reader 2. Moreover, in comparison to the ISP, it showed significantly (p<0.001) lower CTDIvol (1.97±0.55 mGy vs. 4.18±0.62 mGy) and DLP (256±81 mGy x cm vs. 544±83 mGy x cm) but similar image quality (p=0.37 for CNR). Furthermore, image quality was similar between BMI groups (p=0.86 for CNR). A CT protocol that incorporates low kV settings with a personalized (BMI-adjusted) reference amplitude for tube current modulation and iterative reconstruction enables very low radiation exposure CTA, while maintaining good image quality and high diagnostic accuracy in the assessment of PAD. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Inverse source problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
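
    The Landweber algorithm mentioned for recovering the spatial function can be sketched on a generic discretized linear inverse problem A f = g; the operator, step size and iteration count below are illustrative assumptions, not the paper's actual elastodynamic forward map.

```python
import numpy as np

def landweber(A, g, n_iter=2000, tau=None):
    """Landweber iteration f_{k+1} = f_k + tau * A^T (g - A f_k).

    Converges for 0 < tau < 2 / ||A||_2^2 (spectral norm squared)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        f = f + tau * A.T @ (g - A @ f)   # gradient step on ||A f - g||^2 / 2
    return f

# Toy example: recover a smooth "spatial function" from noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))        # stand-in for the forward operator
f_true = np.sin(np.linspace(0, np.pi, 20))
g = A @ f_true
f_rec = landweber(A, g)
```

    In practice the iteration would be stopped early (discrepancy principle) when the data are noisy; the noiseless toy above simply runs a fixed number of steps.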

  15. Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine.

    PubMed

    Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang

    2014-01-27

    Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of large optical elements, which are often used in ICF, is obtained by computing the phase difference of the illuminating and transmitting fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are performed with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.

  16. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-10-01

    This paper describes the Java software framework constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework that derives the rigid reference frame for Gaia observations from the Gaia data itself, making Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some details of the implementation.

  17. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure using the time-evolving particle velocity as the input, which provides a non-contact way to assess the instantaneous vibration behavior of the structure as a whole. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure; the integrals of the equivalent source strengths are then solved by an iterative process and used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method. The time-evolving normal particle velocity and pressure on the hologram surface, measured by a Microflown pressure-velocity probe, are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and obtains more accurate results than the method based on pressure measurements.

  18. ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics

    NASA Astrophysics Data System (ADS)

    Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.

    2017-04-01

    The ECE diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and for physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70-1000 GHz) transmission lines; a high temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70-1000 GHz); and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US Domestic Agency and the ITER Organization (IO). The design needs to conform to the ITER Organization's strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of the various subsystems and components, considering various engineering challenges and solutions, will be discussed in this paper. This paper will also highlight how the various ECE measurements can enhance understanding of plasma physics in ITER.

  19. Iterative weighting of multiblock data in the orthogonal partial least squares framework.

    PubMed

    Boccard, Julien; Rutledge, Douglas N

    2014-02-27

    The integration of multiple data sources has emerged as a pivotal aspect to assess complex systems comprehensively. This new paradigm requires the ability to separate common and redundant from specific and complementary information during the joint analysis of several data blocks. However, inherent problems encountered when analysing single tables are amplified with the generation of multiblock datasets. Finding the relationships between data layers of increasing complexity therefore constitutes a challenging task. In the present work, an algorithm is proposed for the supervised analysis of multiblock data structures. It combines the interpretability of the orthogonal partial least squares (OPLS) framework with the ability of common component and specific weights analysis (CCSWA) to weight each data table individually, in order to grasp its specificities and handle the different sources of Y-orthogonal variation efficiently. Three applications are proposed for illustration purposes. A first example refers to a quantitative structure-activity relationship study aiming to predict the binding affinity of flavonoids toward the P-glycoprotein based on physicochemical properties. A second application concerns the integration of several groups of sensory attributes for overall quality assessment of a series of red wines. A third case study highlights the ability of the method to combine very large heterogeneous data blocks from Omics experiments in systems biology. Results were compared to the reference multiblock partial least squares (MBPLS) method to assess the performance of the proposed algorithm in terms of predictive ability and model interpretability. In all cases, the proposed ComDim-OPLS algorithm was demonstrated to be a relevant data mining strategy for the simultaneous analysis of multiblock structures, accounting for specific variation sources in each dataset and providing a balance between predictive and descriptive purposes. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Closed loop adaptive optics for microscopy without a wavefront sensor.

    PubMed

    Kner, Peter; Winoto, Lukman; Agard, David A; Sedat, John W

    2010-02-24

    A three-dimensional wide-field image of a small fluorescent bead contains more than enough information to accurately calculate the wavefront in the microscope objective back pupil plane using the phase retrieval technique. The phase-retrieved wavefront can then be used to set a deformable mirror to correct the point-spread function (PSF) of the microscope without the use of a wavefront sensor. This technique will be useful for aligning the deformable mirror in a widefield microscope with adaptive optics and could potentially be used to correct aberrations in samples where small fluorescent beads or other point sources are used as reference beacons. Another advantage is the high resolution of the retrieved wavefront compared with current Shack-Hartmann wavefront sensors. Here we demonstrate effective correction of the PSF in three iterations. Starting from a severely aberrated system, we achieve a Strehl ratio of 0.78 and a greater than 10-fold increase in maximum intensity.
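
    The phase retrieval step can be illustrated with a minimal 1-D Gerchberg-Saxton loop between a pupil plane and a focal plane linked by an FFT. This is a generic sketch under assumed constraints (known field magnitudes in both planes), not the authors' 3-D widefield PSF algorithm.

```python
import numpy as np

def gerchberg_saxton(mag_pupil, mag_focal, n_iter=200, seed=0):
    """Alternate between enforcing the known field magnitude in the pupil
    plane and in the focal plane (an FFT pair), keeping the current phase."""
    rng = np.random.default_rng(seed)
    field = mag_pupil * np.exp(1j * rng.uniform(0, 2 * np.pi, mag_pupil.size))
    for _ in range(n_iter):
        focal = np.fft.fft(field)
        focal = mag_focal * np.exp(1j * np.angle(focal))   # impose focal magnitude
        field = np.fft.ifft(focal)
        field = mag_pupil * np.exp(1j * np.angle(field))   # impose pupil magnitude
    return field

# Synthetic example: a Gaussian pupil field with a quadratic phase aberration.
x = np.linspace(-1.0, 1.0, 64)
true_field = np.exp(-x**2 / 0.2) * np.exp(2j * x**2)
mag_pupil = np.abs(true_field)
mag_focal = np.abs(np.fft.fft(true_field))
rec = gerchberg_saxton(mag_pupil, mag_focal)
residual = np.linalg.norm(np.abs(np.fft.fft(rec)) - mag_focal) / np.linalg.norm(mag_focal)
```

    The retrieved phase is only defined up to the usual trivial ambiguities (global phase, shift), which is why convergence is judged by the Fourier-magnitude residual rather than by direct comparison with the true phase.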

  1. Improving Retrieval Performance by Relevance Feedback.

    ERIC Educational Resources Information Center

    Salton, Gerard; Buckley, Chris

    1990-01-01

    Briefly describes the principal relevance feedback methods that have been introduced over the years and evaluates the effectiveness of the methods in producing improved query formulations. Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback. (24 references) (Author/CLB)

  2. Development of the Long Pulse Negative Ion Source for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemsworth, R.S.; Svensson, L.; Esch, H.P.L. de

    2005-04-06

    A model of the ion source designed for the neutral beam injectors of the International Thermonuclear Experimental Reactor (ITER), the KAMABOKO III ion source, is being tested on the MANTIS test stand at the DRFC Cadarache in collaboration with JAERI, Japan, who designed and supplied the ion source. The ion source is attached to a 3-grid 30 keV accelerator (also supplied by JAERI), and the accelerated negative ion current is determined from the energy deposited on a calorimeter located 1.6 m from the source. During experiments on MANTIS, three adverse effects of long pulse operation were found: the negative ion current to the calorimeter is ≈50% of that obtained in short pulse operation; increasing the plasma grid (PG) temperature results in a ≤40% enhancement in negative ion yield, substantially below the ≥100% reported for short pulse operation; and the caesium 'consumption' is up to 1500 times that expected. Results presented here indicate that each of these is, at least partially, explained by thermal effects. Also presented are the results of a detailed characterisation of the source, which enable the most efficient mode of operation to be identified.

  3. Ion-source modeling and improved performance of the CAMS high-intensity Cs-sputter ion source

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    2000-10-01

    The interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS) has been computer modeled using the program NEDLab, with the aim of improving negative ion output. Space charge effects on ion trajectories within the source were modeled through a successive iteration process involving the calculation of ion trajectories through Poisson-equation-determined electric fields, followed by calculation of modified electric fields incorporating the charge distribution from the previously calculated ion trajectories. The program has several additional features that are useful in ion source modeling: (1) averaging of space charge distributions over successive iterations to suppress instabilities, (2) Child's Law modeling of space charge limited ion emission from surfaces, and (3) emission of particular ion groups with a thermal energy distribution and at randomized angles. The results of the modeling effort indicated that significant modification of the interior geometry of the source would double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source. The results of the implementation of the new geometry were found to be consistent with the model results.
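
    The Child's Law emission model listed as feature (2) has a closed form for a planar diode; the sketch below is just that textbook Child-Langmuir expression with illustrative Cs+ numbers, not NEDLab itself.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E = 1.602176634e-19     # elementary charge, C

def child_langmuir_j(voltage, gap, mass):
    """Space-charge-limited current density (A/m^2) for a planar diode:
    J = (4*eps0/9) * sqrt(2*q/m) * V^(3/2) / d^2  (Child's Law)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / mass) * voltage**1.5 / gap**2

# Cs+ ion (~132.9 u) across a 5 mm gap at 2 kV -- illustrative numbers only.
m_cs = 132.905 * 1.66053906660e-27
j = child_langmuir_j(2000.0, 5e-3, m_cs)
```

    The characteristic V^(3/2) scaling is what makes extraction-voltage changes so effective: doubling the extraction voltage raises the space-charge-limited current density by a factor of 2^1.5 ≈ 2.8.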

  4. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object through an adaptive and iterative process rather than manually. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results comparable to those of the manually optimized DCT2 algorithm without full prior information of the imaging object.
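
    The alternating structure of such a gap-filling scheme (low-pass in a transform domain, then re-imposing the measured data) can be sketched as below. For brevity an FFT low-pass with a fixed mask stands in for the paper's adaptive DCT-domain filter, so the per-iteration mask update is the part this sketch omits.

```python
import numpy as np

def iterative_gap_fill(sino, gap_mask, keep_radius=0.05, n_iter=100):
    """POCS-style gap filling: alternately low-pass the estimate in the
    frequency domain and restore the measured samples outside the gaps.
    (A fixed FFT low-pass replaces the adaptive DCT-domain mask here.)"""
    ny, nx = sino.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lowpass = (fy**2 + fx**2) < keep_radius**2
    est = sino.copy()
    for _ in range(n_iter):
        est = np.real(np.fft.ifft2(np.fft.fft2(est) * lowpass))
        est[~gap_mask] = sino[~gap_mask]    # keep measured detector samples
    return est

# Smooth synthetic "sinogram" with three missing detector columns.
n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
true = 1.0 + np.cos(2 * np.pi * yy / n) * np.cos(2 * np.pi * xx / n)
gap_mask = np.zeros((n, n), dtype=bool)
gap_mask[:, 10:13] = True                   # detector gap
sino = np.where(gap_mask, 0.0, true)
filled = iterative_gap_fill(sino, gap_mask)
```

    Because the synthetic image lies entirely in the preserved low-frequency band, the alternating projections recover the gap values; real sinograms need the adaptive mask the paper describes.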

  5. Development of a domain-specific genetic language to design Chlamydomonas reinhardtii expression vectors.

    PubMed

    Wilson, Mandy L; Okumoto, Sakiko; Adam, Laura; Peccoud, Jean

    2014-01-15

    Expression vectors used in different biotechnology applications are designed with domain-specific rules. For instance, promoters, origins of replication or homologous recombination sites are host-specific. Similarly, chromosomal integration or viral delivery of an expression cassette imposes specific structural constraints. As de novo gene synthesis and synthetic biology methods permeate many biotechnology specialties, the design of application-specific expression vectors becomes the new norm. In this context, it is desirable to formalize vector design strategies applicable in different domains. Using the design of constructs to express genes in the chloroplast of Chlamydomonas reinhardtii as an example, we show that a vector design strategy can be formalized as a domain-specific language. We have developed a graphical editor of context-free grammars usable by biologists without prior exposure to language theory. This environment makes it possible for biologists to iteratively improve their design strategies throughout the course of a project. It is also possible to ensure that vectors designed with early iterations of the language are consistent with the latest iteration of the language. The context-free grammar editor is part of the GenoCAD application. A public instance of GenoCAD is available at http://www.genocad.org. GenoCAD source code is available from SourceForge and licensed under the Apache v2.0 open source license.

  6. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using a standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements was analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within the range of ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.

  7. A power-efficient communication system between brain-implantable devices and external computers.

    PubMed

    Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui

    2007-01-01

    In this paper, we propose a power efficient communication system for linking a brain-implantable device to an external system. For battery powered implantable devices, the processor and the transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data which is highly correlated is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save up to 1 to 2.5 dB on transmission power.

  8. Development of in-vessel components of the microfission chamber for ITER.

    PubMed

    Ishikawa, M; Kondoh, T; Ookawa, K; Fujita, K; Yamauchi, M; Hayakawa, A; Nishitani, T; Kusama, Y

    2010-10-01

    Microfission chambers (MFCs) will measure the total neutron source strength in ITER. The MFCs will be installed behind blanket modules in the vacuum vessel (VV). Triaxial mineral insulated (MI) cables will carry signals from the MFCs. The joint connecting triaxial MI cables in the VV must be considered because the MFCs and the MI cables will be installed separately at different times. Vacuum tight triaxial connector of the MI cable has been designed and a prototype has been constructed. Performance tests indicate that the connector can be applied to the ITER environment. A small bending-radius test of the MI cable indicates no observed damage at a curvature radius of 100 mm.

  9. Beyond ITER: neutral beams for a demonstration fusion reactor (DEMO) (invited).

    PubMed

    McAdams, R

    2014-02-01

    In the development of magnetically confined fusion as an economically sustainable power source, the International Thermonuclear Experimental Reactor (ITER) is currently under construction. Beyond ITER is the demonstration fusion reactor (DEMO) programme, in which the physics and engineering aspects of a future fusion power plant will be demonstrated. DEMO will produce net electrical power. The DEMO programme will be outlined and the role of neutral beams for heating and current drive will be described. In particular, the importance of the efficiency of neutral beam systems, in terms of injected neutral beam power compared to wall-plug power, will be discussed. Options for improving this efficiency, including advanced neutralisers and energy recovery, are discussed.

  10. Development of in-vessel components of the microfission chamber for ITER.

    PubMed Central

    Ishikawa, M.; Kondoh, T.; Ookawa, K.; Fujita, K.; Yamauchi, M.; Hayakawa, A.; Nishitani, T.; Kusama, Y.

    2010-01-01

    Microfission chambers (MFCs) will measure the total neutron source strength in ITER. The MFCs will be installed behind blanket modules in the vacuum vessel (VV). Triaxial mineral insulated (MI) cables will carry signals from the MFCs. The joint connecting triaxial MI cables in the VV must be considered because the MFCs and the MI cables will be installed separately at different times. Vacuum tight triaxial connector of the MI cable has been designed and a prototype has been constructed. Performance tests indicate that the connector can be applied to the ITER environment. A small bending-radius test of the MI cable indicates no observed damage at a curvature radius of 100 mm. PMID:21033834

  11. Power Transmission From The ITER Model Negative Ion Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boilson, D.; Esch, H. P. L. de; Grand, C.

    2007-08-10

    In Cadarache, development of negative ion sources is being carried out on the KAMABOKO III ion source on the MANTIS test bed. This is a model of the ion source designed for the neutral beam injectors of ITER. This ion source has been developed in collaboration with JAERI, Japan, who also designed and supplied the ion source. Its target performance is to accelerate a D- beam, with a current density of 200 A/m2 and <1 electron extracted per accelerated D- ion, at a source filling pressure of 0.3 Pa. For ITER a continuous ion beam must be assured for pulse lengths of 1000 s, but beams of up to 3,600 s are also envisaged. The ion source is attached to a 3-grid 30 keV accelerator (also supplied by JAERI), and the accelerated negative ion current is determined from the energy deposited on a calorimeter. During long pulse operation (≤1000 s) it was found that the current density of both D- and H- beams, measured at the calorimeter, was lower than expected and that a large discrepancy existed between the accelerated currents measured electrically and those transmitted to the calorimeter. The possibility that this discrepancy arose because the accelerated current included electrons (which would not be able to reach the calorimeter) was investigated and subsequently eliminated. Further studies have shown that the fraction of the electrical current reaching the calorimeter varies with the pulse length, which led to the suggestion that one or more of the accelerator grids were distorting due to the incident power during operation, leading to a progressive deterioration in the beam quality. New extraction and acceleration grids have been designed and installed, which should have a better tolerance to thermal loads than those previously used. This paper describes the measurements of the power transmission and distribution using these grids.

  12. Influence of iterative reconstruction on coronary calcium scores at multiple heart rates: a multivendor phantom study on state-of-the-art CT systems.

    PubMed

    van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T

    2017-12-28

    The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS-AS) and mass score (IRS-MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and indicated as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While subtle differences were found for Agatston scores of the low mass calcification, medium and high mass calcifications showed increased CCS up to 77% with increasing heart rates. IRS-AS of CT1-CT4 were 17, 41, 130 and 22% higher than IRS-MS. Not only were IRS significantly different between all CT systems, but also between calcification masses. Up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% for Agatston and mass score were found.
In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.

  13. Diagonalization of complex symmetric matrices: Generalized Householder reflections, iterative deflation and implicit shifts

    NASA Astrophysics Data System (ADS)

    Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.

    2017-12-01

    We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = Aᵀ, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product ⟨u, v⟩* = Σᵢ uᵢ vᵢ. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
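
    A single generalized Householder reflection of the kind described, built on the conjugation-free bilinear form, can be sketched as follows; the breakdown case ⟨x, x⟩* ≈ 0 (an isotropic vector) is ignored in this minimal illustration.

```python
import numpy as np

def bilinear(u, v):
    """Indefinite 'inner product' <u, v>_* = sum_i u_i v_i (no conjugation)."""
    return np.sum(u * v)

def generalized_householder(x):
    """Return a complex symmetric H with H @ x = alpha * e_1 and H @ H = I.

    H = I - 2 w w^T / <w, w>_*  with  w = x - alpha * e_1,  alpha^2 = <x, x>_*.
    The branch of alpha is chosen to keep <w, w>_* away from zero."""
    n = x.size
    alpha = np.sqrt(bilinear(x, x) + 0j)
    if abs(alpha - x[0]) < abs(-alpha - x[0]):
        alpha = -alpha                      # avoid cancellation in w[0]
    w = x.astype(complex).copy()
    w[0] -= alpha
    H = np.eye(n, dtype=complex) - 2.0 * np.outer(w, w) / bilinear(w, w)
    return H, alpha

# Reflect a random complex vector onto the first coordinate axis.
rng = np.random.default_rng(1)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
H, alpha = generalized_householder(x)
```

    Note that H is symmetric (H = Hᵀ) and an involution (H² = I) with respect to the ordinary transpose, not the conjugate transpose, which is exactly what keeps the tridiagonalized matrix complex symmetric.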

  14. Phylogenomic Insights into Mouse Evolution Using a Pseudoreference Approach

    PubMed Central

    Sarver, Brice A.J.; Keeble, Sara; Cosart, Ted; Tucker, Priscilla K.; Dean, Matthew D.

    2017-01-01

    Comparative genomic studies are now possible across a broad range of evolutionary timescales, but the generation and analysis of genomic data across many different species still present a number of challenges. The most sophisticated genotyping and downstream analytical frameworks are still predominantly based on comparisons to high-quality reference genomes. However, established genomic resources are often limited within a given group of species, necessitating comparisons to divergent reference genomes that could restrict or bias comparisons across a phylogenetic sample. Here, we develop a scalable pseudoreference approach to iteratively incorporate sample-specific variation into a genome reference and reduce the effects of systematic mapping bias in downstream analyses. To characterize this framework, we used targeted capture to sequence whole exomes (∼54 Mbp) in 12 lineages (ten species) of mice spanning the Mus radiation. We generated whole exome pseudoreferences for all species and show that this iterative reference-based approach improved basic genomic analyses that depend on mapping accuracy while preserving the associated annotations of the mouse reference genome. We then use these pseudoreferences to resolve evolutionary relationships among these lineages while accounting for phylogenetic discordance across the genome, contributing an important resource for comparative studies in the mouse system. We also describe patterns of genomic introgression among lineages and compare our results to previous studies. Our general approach can be applied to whole or partitioned genomic data and is easily portable to any system with sufficient genomic resources, providing a useful framework for phylogenomic studies in mice and other taxa. PMID:28338821
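
    The core reference-update step of such a pseudoreference approach can be sketched as repeated substitution of sample-specific variant calls into the current reference; read mapping and variant calling are assumed to happen elsewhere, and the sequences and variants below are made up for illustration.

```python
def apply_variants(reference, variants):
    """Substitute sample-specific SNVs (0-based position -> base) into a
    reference sequence to produce the next pseudoreference iteration."""
    seq = list(reference)
    for pos, base in variants.items():
        seq[pos] = base
    return "".join(seq)

# Two rounds of the iterative update: each round's variants would come from
# re-mapping the sample's reads against the previous pseudoreference.
ref = "ACGTACGTACGT"
round1 = apply_variants(ref, {0: "G", 5: "T"})
round2 = apply_variants(round1, {7: "A"})
```

    Iterating until few new variants are called converges the pseudoreference toward the sample, which is what reduces the systematic mapping bias against the divergent original reference.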

  15. Overview of the design of the ITER heating neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Hemsworth, R. S.; Boilson, D.; Blatchford, P.; Dalla Palma, M.; Chitarin, G.; de Esch, H. P. L.; Geli, F.; Dremel, M.; Graceffa, J.; Marcuzzi, D.; Serianni, G.; Shah, D.; Singh, M.; Urbani, M.; Zaccaria, P.

    2017-02-01

    The heating neutral beam injectors (HNBs) of ITER are designed to deliver 16.7 MW of 1 MeV D0 or 0.87 MeV H0 to the ITER plasma for up to 3600 s. They will be the most powerful neutral beam (NB) injectors ever, delivering higher energy NBs to the plasma in a tokamak for longer than any previous systems have done. The design of the HNBs is based on the acceleration and neutralisation of negative ions as the efficiency of conversion of accelerated positive ions is so low at the required energy that a realistic design is not possible, whereas the neutralisation of H- and D- remains acceptable (≈56%). The design of a long pulse negative ion based injector is inherently more complicated than that of short pulse positive ion based injectors because: • negative ions are harder to create so that they can be extracted and accelerated from the ion source; • electrons can be co-extracted from the ion source along with the negative ions, and their acceleration must be minimised to maintain an acceptable overall accelerator efficiency; • negative ions are easily lost by collisions with the background gas in the accelerator; • electrons created in the extractor and accelerator can impinge on the extraction and acceleration grids, leading to high power loads on the grids; • positive ions are created in the accelerator by ionisation of the background gas by the accelerated negative ions and the positive ions are back-accelerated into the ion source creating a massive power load to the ion source; • electrons that are co-accelerated with the negative ions can exit the accelerator and deposit power on various downstream beamline components. The design of the ITER HNBs is further complicated because ITER is a nuclear installation which will generate very large fluxes of neutrons and gamma rays. Consequently all the injector components have to survive in that harsh environment. 
Additionally, the beamline components and the NB cell, where the beams are housed, will be activated, and all maintenance will have to be performed remotely. This paper describes the design of the HNB injectors, but not the associated power supplies, cooling system, cryogenic system, etc., or the high voltage bushing which separates the vacuum of the beamline from the high pressure SF6 of the high voltage (1 MV) transmission line, through which the power, gas and cooling water are supplied to the beam source. The magnetic field reduction system is also not described.

  16. Management of Computer-Based Instruction: Design of an Adaptive Control Strategy.

    ERIC Educational Resources Information Center

    Tennyson, Robert D.; Rothen, Wolfgang

    1979-01-01

    Theoretical and research literature on learner, program, and adaptive control as forms of instructional management are critiqued in reference to the design of computer-based instruction. An adaptive control strategy using an online, iterative algorithmic model is proposed. (RAO)

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan

    The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. 
While the performance and stability of the Jacobi iteration are substantially improved compared to the Gauss-Seidel iteration, it does yield a roughly 8-10% increase in the overall memory requirement.

  18. Improvement of transport-corrected scattering stability and performance using a Jacobi inscatter algorithm for 2D-MOC

    DOE PAGES

    Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan

    2017-03-10

    The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. 
While the performance and stability of the Jacobi iteration are substantially improved compared to the Gauss-Seidel iteration, it does yield a roughly 8-10% increase in the overall memory requirement.
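The difference between the two inscatter schemes can be sketched with a toy 0-D multigroup fixed-source problem. The cross sections below are invented for illustration, and the "sweep" is reduced to a scalar flux update, so this mirrors only the source-construction logic, not MPACT's MOC solver:

```python
# Toy 0-D, 3-group fixed-source problem contrasting the two inscatter
# schemes. Cross sections are invented; sigma_s[g_from][g_to].
G = 3
sigma_t = [1.0, 1.2, 1.5]
sigma_s = [[0.2, 0.3, 0.1],
           [0.05, 0.3, 0.4],
           [0.0, 0.1, 0.5]]
q_ext = [1.0, 0.0, 0.0]   # external source, group 0 only

def solve(scheme, n_outer=200):
    phi = [0.0] * G
    for _ in range(n_outer):
        phi_prev = phi[:]   # fluxes from the previous outer iteration
        for g in range(G):
            # Gauss-Seidel: inscatter uses fluxes already updated this pass.
            # Jacobi: inscatter uses only the previous outer iterate, so the
            # group loop could be moved innermost in a real MOC sweeper.
            src = phi if scheme == "gauss-seidel" else phi_prev
            inscatter = sum(sigma_s[gp][g] * src[gp]
                            for gp in range(G) if gp != g)
            phi[g] = (q_ext[g] + inscatter) / (sigma_t[g] - sigma_s[g][g])
    return phi

gs = solve("gauss-seidel")
ja = solve("jacobi")
```

Both schemes converge to the same fixed point here; they differ only in which iterate feeds the inscatter source, which is exactly what permits the loop reordering described in the abstract.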

  19. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector.

    PubMed

    Barbisan, M; Zaniol, B; Pasqualotto, R

    2014-11-01

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H(-)/D(-) RF ion source, and MITICA, a prototype of the full performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor the operation and allow the performance of the two prototypes to be optimized. In particular, beam emission spectroscopy will measure the uniformity and the divergence of the fast particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model at the base of the simulations and presents the modeled Hα spectra in the case of the MITICA experiment.
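At the core of such a spectral model is the Doppler shift of the Hα line emitted by the fast beam particles. A minimal non-relativistic sketch follows, with rounded constants and the geometry reduced to a single observation angle; this is an illustration of the underlying physics, not the actual diagnostic code:

```python
import math

M_P = 1.6726e-27    # proton mass [kg]
QE = 1.6022e-19     # elementary charge [C]
C = 2.9979e8        # speed of light [m/s]
H_ALPHA = 656.28    # unshifted H-alpha wavelength [nm]

def doppler_shifted_halpha(energy_keV, angle_deg, mass=M_P):
    """Doppler-shifted H-alpha wavelength for a beam particle of the given
    energy, observed at angle_deg between velocity and line of sight."""
    v = math.sqrt(2.0 * energy_keV * 1e3 * QE / mass)   # classical speed
    return H_ALPHA * (1.0 + (v / C) * math.cos(math.radians(angle_deg)))

# e.g. a 100 keV H beam viewed 30 degrees off-axis
shift_nm = doppler_shifted_halpha(100.0, 30.0) - H_ALPHA
```

Because the beam emission is shifted by several nanometres while the background-gas emission is not, the two components can be separated in the collected spectrum.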

  20. Hybrid cloud and cluster computing paradigms for life science applications

    PubMed Central

    2010-01-01

    Background Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce has poor performance on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Results Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments. PMID:21210982

  1. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce has poor performance on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
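The iterative pattern that Twister targets can be illustrated with a toy driver that repeatedly invokes a map and a reduce phase until the broadcast data converge, here as 1-D k-means. This sketches only the programming pattern, not the Twister API:

```python
# Iterative MapReduce pattern: the driver loops over map/reduce phases,
# re-broadcasting the centroids (the loop-invariant data that plain
# Hadoop would reload from scratch on every iteration).

def map_phase(points, centroids):
    # emit (nearest-centroid-index, point) pairs
    out = []
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        out.append((idx, p))
    return out

def reduce_phase(pairs, k):
    # average the points assigned to each centroid
    sums, counts = [0.0] * k, [0] * k
    for idx, p in pairs:
        sums[idx] += p
        counts[idx] += 1
    return [sums[i] / counts[i] if counts[i] else 0.0 for i in range(k)]

points = [0.1, 0.2, 0.15, 3.9, 4.1, 4.0]
centroids = [0.0, 1.0]
for _ in range(10):                 # driver iterates map/reduce to a fixed point
    centroids = reduce_phase(map_phase(points, centroids), 2)
```

The cost of re-reading the static data each cycle is exactly the overhead an iterative MapReduce runtime avoids by keeping long-lived map/reduce tasks.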

  2. Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; JET-EFDA contributors

    2015-08-01

    The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes, (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. Main observations are: (a) a low primary erosion source in H-mode plasmas and reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30-50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10^-5 and is determined by beryllium ions. (b) Reduction of the long-term fuel retention (factor 10-20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. 
Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirms its advantageous PSI behaviour, and gives strong support to the ITER material selection.

  3. Development and tests of molybdenum armored copper components for MITICA ion source

    NASA Astrophysics Data System (ADS)

    Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo

    2016-02-01

    In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility Garching Large Divertor Sample Test Facility (GLADIS) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  4. Development and tests of molybdenum armored copper components for MITICA ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavei, Mauro, E-mail: mauro.pavei@igi.cnr.it; Marcuzzi, Diego; Rizzolo, Andrea

    2016-02-15

    In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility Garching Large Divertor Sample Test Facility (GLADIS) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  5. Development and tests of molybdenum armored copper components for MITICA ion source.

    PubMed

    Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo

    2016-02-01

    In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility Garching Large Divertor Sample Test Facility (GLADIS) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  6. DEVELOPMENT OF INTERATOMIC POTENTIALS IN TUNGSTEN-RHENIUM SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setyawan, Wahyu; Nandipati, Giridhar; Kurtz, Richard J.

    2016-09-01

    Reference data are generated using the ab initio method to fit interatomic potentials for the W-Re system. The reference data include single phases of W and Re, strained structures, slabs, systems containing several concentrations of vacancies, systems containing various types of interstitial defects, melt structures, structures in the σ and χ phases, and structures containing several concentrations of solid solutions of Re in bcc W and W in hcp Re. Future work will start the fitting iterations.
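The fitting step itself is, at heart, least-squares minimisation of a cost over potential parameters against the reference data. The sketch below uses a Morse pair potential and synthetic reference energies (not W-Re data), with a crude coordinate-descent loop standing in for a production fitting code:

```python
import math

def morse(r, D, a, r0):
    # Morse pair potential; minimum of depth D at separation r0
    return D * (1.0 - math.exp(-a * (r - r0)))**2 - D

# synthetic "reference" energies generated from known parameters
true_params = (0.9, 1.4, 2.7)
ref = [(r, morse(r, *true_params)) for r in (2.2, 2.5, 2.8, 3.1, 3.5, 4.0)]

def cost(params):
    # least-squares mismatch against the reference data
    return sum((morse(r, *params) - e)**2 for r, e in ref)

# crude fitting iterations: coordinate descent with a shrinking step
p, step = [0.5, 1.0, 2.5], 0.2
for _ in range(60):
    for i in range(3):
        for cand in (p[i] - step, p[i] + step):
            trial = p[:]
            trial[i] = cand
            if cost(trial) < cost(p):
                p = trial
    step *= 0.9
```

Real potential fits weight many configuration classes (strained structures, defects, melts) in the same kind of objective; the loop above only conveys the iterative shape of the problem.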

  7. ’In situ’ Measurement of the Ratio of Aerosol Absorption to Extinction Coefficient.

    DTIC Science & Technology

    1980-08-01

    procedure for settling measurements was to obtain a reference (presmoke) level of stabilized power on both of the calorimeters indicated in figure 1 … sizing measurements which might be appropriate and accurate for this application are also being investigated.

  8. Export Control Requirements for Tritium Processing Design and R&D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollis, William Kirk; Maynard, Sarah-Jane Wadsworth

    This document addresses the requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document identifies the key reference documents and the specific sections within them related to tritium research. Guidance as to the application of these sections is discussed, with specific detail on publications and work with foreign nationals.

  9. Integrated modeling of temperature and rotation profiles in JET ITER-like wall discharges

    NASA Astrophysics Data System (ADS)

    Rafiq, T.; Kritz, A. H.; Kim, Hyun-Tae; Schuster, E.; Weiland, J.

    2017-10-01

    Simulations of 78 JET ITER-like wall D-D discharges and 2 D-T reference discharges are carried out using the TRANSP predictive integrated modeling code. The time-evolved temperature and rotation profiles are computed utilizing the Multi-Mode anomalous transport model. The discharges involve a broad range of conditions, including scans over gyroradius, collisionality, and values of q95. The D-T reference discharges are selected in anticipation of the D-T experimental campaign planned at JET in 2019. The simulated temperature and rotation profiles are compared with the corresponding experimental profiles in the radial range from the magnetic axis to the ρ = 0.9 flux surface. The comparison is quantified by calculating the RMS deviations and offsets. Overall, good agreement is found between the profiles produced in the simulations and the experimental data. It is planned that the simulations obtained using the Multi-Mode model will be compared with simulations using the TGLF model. Research supported in part by the US DoE Office of Science.
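The two figures of merit named above can be written down compactly. The definitions below, RMS deviation and offset normalised to the peak experimental value, follow one common convention in profile-comparison work; the exact normalisation in the paper may differ, and the profile numbers are invented:

```python
import math

def rms_and_offset(sim, exp):
    """RMS deviation and mean offset between simulated and experimental
    profiles, both normalised to the peak experimental value."""
    n = len(exp)
    peak = max(abs(e) for e in exp)
    rms = math.sqrt(sum((s - e)**2 for s, e in zip(sim, exp)) / n) / peak
    offset = sum(s - e for s, e in zip(sim, exp)) / (n * peak)
    return rms, offset

sim = [4.1, 3.6, 2.9, 2.1, 1.2]      # e.g. simulated Ti profile [keV]
exp = [4.0, 3.5, 3.0, 2.0, 1.0]      # corresponding measurements
rms, offset = rms_and_offset(sim, exp)
```

The RMS captures the point-by-point disagreement, while the offset reveals whether the simulation is systematically high or low across the profile.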

  10. Export Control Requirements for Tritium Processing Design and R&D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollis, William Kirk; Maynard, Sarah-Jane Wadsworth

    2015-10-30

    This document addresses the requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document identifies the key reference documents and the specific sections within them related to tritium research. Guidance as to the application of these sections is discussed, with specific detail on publications and work with foreign nationals.

  11. Data from: Solving the Robot-World Hand-Eye(s) Calibration Problem with

    Science.gov Websites

    Iterative Methods. Ag Data Commons, National Agricultural Library, United States Department of Agriculture. License: U.S. Public Domain. Funding Source(s): National Science Foundation IOS-1339211; Agricultural Research

  12. Evaluating the effect of increased pitch, iterative reconstruction and dual source CT on dose reduction and image quality.

    PubMed

    Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier

    2018-06-14

    To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) with standard pitch reconstructed with filtered back projection (FBP) using dual source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic scans, 54 thoracoabdominal scans and 21 abdominal scans. Three protocols were analysed: pitch of 1 reconstructed with FBP; pitch of 3.2 reconstructed with SAFIRE; and pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and the dose differences of the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
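For orientation, the dose bookkeeping behind such comparisons is simple arithmetic: DLP = CTDIvol × scan length, and effective dose E ≈ k·DLP with a region-specific conversion coefficient (k ≈ 0.014 mSv/(mGy·cm) is commonly tabulated for the chest). The CTDIvol values below are hypothetical, not taken from this study:

```python
# Illustrative dose arithmetic only; input values are hypothetical.
def effective_dose_mSv(ctdi_vol_mGy, scan_length_cm, k=0.014):
    dlp = ctdi_vol_mGy * scan_length_cm   # dose-length product [mGy*cm]
    return k * dlp                        # effective dose [mSv]

standard = effective_dose_mSv(8.0, 35.0)     # pitch-1 chest protocol
high_pitch = effective_dose_mSv(2.0, 35.0)   # 75% lower CTDIvol
reduction = 1.0 - high_pitch / standard      # fractional dose saving
```

Because DLP and effective dose scale linearly with CTDIvol at fixed scan length, a 75% CTDIvol reduction translates directly into a 75% effective-dose reduction.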

  13. Submillisievert coronary CT angiography with adaptive prospective ECG-triggered sequence acquisition and iterative reconstruction in patients with high heart rate on the dual-source CT.

    PubMed

    Tang, Pei-Hua; Du, Ben-Jun; Fang, Xiang-Ming; Hu, Xiao-Yun; Qian, Ping-Yan; Gao, Quan-Sheng

    2016-11-22

    To assess the application value of submillisievert coronary CT angiography (CCTA) in patients with a high heart rate (HR), acquired with adaptive prospective ECG-triggered sequence acquisition and iterative reconstruction on the second-generation dual-source CT. A total of 120 consecutive high-HR patients with suspected coronary artery disease underwent CCTA and invasive coronary angiography (ICA) within two weeks. Patients were randomly assigned into three groups: group A (n = 40), in which patients underwent retrospectively ECG-triggered acquisition CCTA at 100 kVp; group B (n = 40), in which patients received adaptive prospective ECG-triggered sequence acquisition at 100 kVp; and group C (n = 40), in which patients underwent adaptive prospective ECG-triggered sequence acquisition at 80 kVp with iterative reconstruction. The mean CT values, signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) in the ascending aorta and coronary arteries of the three groups were measured and compared, as were the image quality and radiation dose. The consistency of the display of coronary stenosis in each group was assessed against the results of ICA as the gold standard. There was no significant difference in gender, age or body mass index (BMI) (all P > 0.05). The mean attenuations, SNRs and CNRs in the ascending aorta and coronary arteries were not significantly different between group A and group B (P > 0.05). The mean attenuations of group C were significantly higher than in groups A and B (P < 0.01), but the image noise and CNR were significantly lower in group C (P < 0.01). The number of appreciable segments among the three groups was not significantly different on a per-segment and per-vessel basis (P > 0.05), nor was the subjective image quality (P > 0.05). 
With the ICA result as a reference standard, there was good consistency in the evaluation of the coronary stenosis degree between CCTA and ICA (r > 0.75), as well as in the assessment of the coronary stenosis rate using the Bland-Altman analysis. The mean radiation dose in group B was half of that in group A. Moreover, the mean radiation dose in group C was less than one sixth of that in group A and less than 1 mSv (0.7±0.2 mSv). For patients with high HR, adaptive prospective ECG-triggered sequence acquisition on the FLASH dual-source CT yields equal image quality at half the radiation dose compared with retrospectively ECG-triggered spiral acquisition at the same tube voltage (100 kVp) and same R-R interval of exposure. In addition, adaptive prospective ECG-triggered sequence acquisition combined with low tube voltage and iterative reconstruction can further reduce the radiation dose to the submillisievert level without compromising image quality or the accuracy of assessing the coronary stenosis degree, and can be adopted as a routine technique.
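The objective metrics referred to above are ratios of ROI statistics. A minimal sketch follows; the definitions are standard, but the HU values are invented rather than taken from this study:

```python
def snr(mean_roi, sd_roi):
    # signal-to-noise ratio: mean attenuation over image noise (SD) in the ROI
    return mean_roi / sd_roi

def cnr(mean_vessel, mean_background, sd_background):
    # contrast-to-noise ratio: attenuation difference over background noise
    return (mean_vessel - mean_background) / sd_background

aorta_hu, aorta_sd = 450.0, 25.0    # contrast-filled ascending aorta ROI [HU]
muscle_hu, muscle_sd = 50.0, 20.0   # adjacent soft-tissue ROI [HU]
aorta_snr = snr(aorta_hu, aorta_sd)
aorta_cnr = cnr(aorta_hu, muscle_hu, muscle_sd)
```

This is why a lower tube voltage can preserve CNR despite more noise: iodine attenuation (the numerator) rises faster at 80 kVp than the noise in the denominator when iterative reconstruction suppresses the latter.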

  14. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
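The design-mode iteration described above, in which the geometry is updated until the computed pressures match the prescribed ones, can be caricatured as a one-parameter fixed-point loop. The "analysis" operator below is a made-up scalar surrogate, not a panel-method solver:

```python
# One-parameter caricature of inverse design: iterate the shape until the
# analysis result matches the prescribed pressure target.
def analysis(thickness):
    # toy surrogate: suction-peak Cp grows more negative with thickness
    return -0.5 - 2.0 * thickness

cp_target = -0.9
thickness = 0.05
for iteration in range(1, 20):
    cp = analysis(thickness)
    residual = cp_target - cp
    if abs(residual) < 1e-10:
        break
    thickness += -0.5 * residual   # correction using d(cp)/d(thickness) = -2
```

In the real method the "parameter" is the full surface geometry and the analysis is the singularity solution, but the structure, analyze, form a pressure residual, correct the geometry, repeat, is the same, and convergence in a handful of cycles is consistent with the roughly five cycles quoted above.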

  15. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and deteriorate the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphic processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is thus proposed. 
Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging in the clinic.
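The key modification, regularising the sum of the reconstructed image and a compensation image while leaving data fidelity untouched, can be sketched in 1-D. Plain gradient descent on a smoothed TV stands in for the FISTA solver here, and all numbers are illustrative:

```python
import math

def smoothed_tv_grad(u, eps=1e-2):
    # gradient of sum_i sqrt((u[i+1]-u[i])^2 + eps), a smoothed TV
    g = [0.0] * len(u)
    for i in range(len(u) - 1):
        d = u[i + 1] - u[i]
        w = d / math.sqrt(d * d + eps)
        g[i] -= w
        g[i + 1] += w
    return g

y = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]   # noisy 1-D "reconstruction" data
s = [0.0] * len(y)                   # shading-compensation image (zero here)
lam, step = 0.2, 0.1
x = y[:]
for _ in range(300):
    u = [xi + si for xi, si in zip(x, s)]     # compensated image
    tvg = smoothed_tv_grad(u)                 # TV acts on x + s ...
    x = [xi - step * ((xi - yi) + lam * gi)   # ... fidelity stays on x
         for xi, yi, gi in zip(x, y, tvg)]
```

With a nonzero s that mirrors the estimated shading, the TV term stops penalising the slow shading gradient and penalises only genuine edges, which is the effect the proposed method exploits.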

  16. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
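The reference technique, direct perturbation with central differences, is easy to state. The sketch below applies it to an analytic one-group bare-slab k(L) in place of a Monte Carlo estimate; all constants are illustrative:

```python
import math

# one-group constants (illustrative): nu*Sigma_f, Sigma_a, diffusion coeff.
NU_SF, SIG_A, D = 0.60, 0.50, 1.2

def k_eff(L):
    # bare-slab one-group diffusion: k = nu*Sigma_f / (Sigma_a + D*B^2),
    # with geometric buckling B^2 = (pi/L)^2
    buckling = (math.pi / L) ** 2
    return NU_SF / (SIG_A + D * buckling)

def central_difference(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

L = 20.0
numeric = central_difference(k_eff, L, 0.01)
# analytic dk/dL = 2*nu*Sigma_f*D*pi^2 / (L^3 * (Sigma_a + D*B^2)^2)
analytic = (2.0 * NU_SF * D * math.pi**2
            / (L**3 * (SIG_A + D * (math.pi / L) ** 2) ** 2))
```

With a Monte Carlo solver each evaluation of k(L±h) carries statistical noise, which is why the perturbation-theory estimators above are attractive: they produce the derivative from a single transport calculation.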

  17. Investigation of intrinsic toroidal rotation scaling in KSTAR

    NASA Astrophysics Data System (ADS)

    Yoo, J. W.; Lee, S. G.; Ko, S. H.; Seol, J.; Lee, H. H.; Kim, J. H.

    2017-07-01

The behavior of intrinsic toroidal rotation without any external momentum source is investigated in KSTAR. In these experiments, pure ohmic discharges spanning a wide range of plasma parameters are carefully selected and analyzed to probe the still-unidentified origin of toroidal rotation, excluding auxiliary heating, magnetic perturbations, and strong magneto-hydrodynamic activity. The measured core toroidal rotation in KSTAR is mostly in the counter-current direction, and its magnitude depends strongly on the ion temperature divided by the plasma current (Ti/IP). In particular, the core toroidal rotation in steady state is well fitted by a Ti/IP scaling with a slope of ˜-23, and possible explanations of the scaling are compared against various candidates. The calculated offset rotation cannot explain the measured core toroidal rotation, since KSTAR has an extremely low intrinsic error field, and from the stability conditions for ion and electron turbulence it is difficult to identify a dominant turbulence mode in this study. In addition, the intrinsic toroidal rotation level in ITER is estimated for future reference based on the KSTAR scaling, since intrinsic rotation plays an important role in stabilizing resistive wall modes.

  18. Children's eyewitness memory: repeating post-event misinformation reduces the distinctiveness of a witnessed event.

    PubMed

    Bright-Paul, Alexandra; Jarrold, Christopher

    2012-01-01

Children may incorporate misinformation into reports of witnessed events, particularly if the misinformation is repeated. One explanation is that the misinformation trace is strengthened by repetition. Alternatively, repeating misinformation may reduce the discriminability between event and misinformation sources, increasing interference between them. We tested the trace-strength and distinctiveness accounts by showing 5- and 6-year-olds an event and then presenting either the "same" or "varying" items of post-event misinformation across three iterations. Performance was compared to a baseline in which misinformation was presented once. Repeating the same misinformation increased suggestibility when misinformation was erroneously attributed to both event and misinformation sources, supporting a trace-strength interpretation. However, suggestibility measured by attributing misinformation solely to the event was lower when misinformation was presented repeatedly rather than once. In contrast, identification of the correct source of the event was less likely if the misinformation was repeated, whether the same or different across iterations. Thus a reduction in the distinctiveness of sources disrupted memory for the event source. Moreover, there was a strong association between memory for the event and a measure of the distinctiveness of sources that takes into account both the number of confusable sources and their apparent temporal spacing from the point of retrieval.

  19. Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, L.; Cao, Z.; Zhou, H.

    2017-12-01

Groundwater modeling calls for an effective and robust integration method to fill the gap between model and data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed; it updates the models using all the data together and therefore has a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed that repeatedly updates the models by assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES, and the iterative ES using a synthetic groundwater modeling example. Hydraulic head data modeled on the basis of a reference conductivity field are used to inversely estimate conductivities at unsampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost, and (2) the iterative ES outperforms the ES through its repeated updating. These findings suggest that the iterative ES deserves more attention for data assimilation in groundwater modeling.
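The EnKF analysis step being compared above can be sketched as follows. This is a generic perturbed-observation EnKF update under illustrative assumptions (a linear observation operator, uncorrelated observation errors); the state layout and all names are hypothetical, not taken from the study.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """One perturbed-observation EnKF analysis step.

    ensemble: (n_state, n_members) prior states, e.g. log-conductivities
    augmented with simulated heads; H: linear observation operator
    mapping state to observations; obs: measured data such as hydraulic
    heads. All quantities here are illustrative.
    """
    n_obs, n_mem = len(obs), ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)    # ensemble anomalies
    HA = H @ A
    P_hh = HA @ HA.T / (n_mem - 1)                         # obs-space covariance
    P_xh = A @ HA.T / (n_mem - 1)                          # state-obs cross covariance
    R = obs_err_std**2 * np.eye(n_obs)                     # obs error covariance
    K = P_xh @ np.linalg.solve(P_hh + R, np.eye(n_obs))    # Kalman gain
    # each member assimilates a perturbed copy of the observations
    perturbed = obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))
    return ensemble + K @ (perturbed - H @ ensemble)
```

The EnKF applies this update sequentially as data arrive; the (iterative) ES applies the same formula once (or a few times) to all data together, which is the source of its lower cost.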

  20. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.

    PubMed

    Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F

    2013-12-01

The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario, although a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed the absorber thickness to be cross-checked by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to the reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.
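The thickness cross-check described above amounts to fitting the measured efficiency to Beer-Lambert absorption in a homogeneous Pt layer. A minimal sketch of that forward model and its inversion; the mass attenuation coefficient below is a purely illustrative placeholder, not the value used in the paper (only Pt's bulk density of 21.45 g/cm³ is a standard figure).

```python
import math

PT_DENSITY = 21.45  # g/cm^3, bulk platinum

def absorber_efficiency(mu_rho, rho, t_cm):
    """Fraction of photons absorbed in a homogeneous layer
    (Beer-Lambert): 1 - exp(-(mu/rho) * rho * t)."""
    return 1.0 - math.exp(-mu_rho * rho * t_cm)

def thickness_from_efficiency(eff, mu_rho, rho):
    """Invert the model: recover the layer thickness implied by a
    measured absorption efficiency at a single photon energy."""
    return -math.log(1.0 - eff) / (mu_rho * rho)
```

In the paper the fit is done across the sharp jump of mu/rho at the Pt-L3 edge, which pins down the thickness far better than a single-energy inversion like this one.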

  1. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    Optoelectronic apparatus acts as artificial neural network performing associative recall of binary images. Recall process is iterative one involving optical computation of inner products between binary input vector and one or more reference binary vectors in memory. Inner-product method requires far less memory space than matrix-vector method.
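The inner-product recall loop can be sketched as follows. The bipolar encoding and sign-threshold update are standard associative-memory conventions assumed here, not details taken from the record; the point illustrated is the memory claim: M reference vectors of length N need O(M·N) storage, versus O(N·N) for an outer-product weight matrix.

```python
import numpy as np

def recall(memory, probe, n_iter=10):
    """Iterative inner-product associative recall (sketch).

    Bipolar (+1/-1) reference vectors are the rows of `memory`. Each
    pass computes inner products of the current estimate with every
    stored vector, uses them to weight the references, and thresholds
    the weighted sum back to a bipolar vector, until a fixed point.
    """
    v = probe.copy()
    for _ in range(n_iter):
        scores = memory @ v                  # inner products with references
        v_new = np.sign(memory.T @ scores)   # weighted sum of references
        v_new[v_new == 0] = 1                # break ties consistently
        if np.array_equal(v_new, v):
            break                            # converged to a stored pattern
        v = v_new
    return v
```

With well-separated stored patterns, a probe corrupted in a few positions converges to the nearest stored vector in one or two passes.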

  2. Ontology Matching Across Domains

    DTIC Science & Technology

    2010-05-01

    matching include GMO [1], Anchor-Prompt [2], and Similarity Flooding [3]. GMO is an iterative structural matcher, which uses RDF bipartite graphs to...AFRL under contract# FA8750-09-C-0058. References [1] Hu, W., Jian, N., Qu, Y., Wang, Y., “ GMO : a graph matching for ontologies”, in: Proceedings of

  3. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

A variety of problems in science and engineering may be described by fractional partial differential equations (FPDEs) involving space- and/or time-fractional derivatives. The difference between time-fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations, but their application to the time-fractional counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time-fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration count. At the request of all authors of the paper, an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1, and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.

  4. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross-section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark compares the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and the strong neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronics results.

  5. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array, a filter process here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter and performing correction on transmit and receive has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal; the other uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterated aberration correction with a TDA filter have been investigated to study its convergence properties, with aberration generated by a weak and a strong human-body wall model, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even in the case of strong aberration.
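The first estimation method, correlating each element signal with a reference, reduces in its simplest form to locating a cross-correlation peak. A minimal sketch of that delay estimate (sample-resolution only; the real method also estimates amplitude and would interpolate the peak):

```python
import numpy as np

def estimate_delay(signal, reference, fs):
    """Estimate the arrival-time delay of one array-element signal
    relative to a reference signal by locating the peak of their
    cross-correlation. Returns the delay in seconds (positive when
    `signal` lags `reference`); fs is the sampling rate in Hz."""
    corr = np.correlate(signal, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)  # index n-1 is zero lag
    return lag / fs
```

Repeating this per element yields the time-delay part of a TDA filter estimate, which the iteration above then refines after each corrected retransmission.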

  6. Adjoint tomography of Europe

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Bozdag, E.; Peter, D. B.; Tromp, J.

    2010-12-01

We use spectral-element and adjoint methods to image crustal and upper mantle heterogeneity in Europe. The study area involves the convergent boundaries of the Eurasian, African and Arabian plates and the divergent boundary between the Eurasian and North American plates, making the tectonic structure of this region complex. Our goal is to iteratively fit observed seismograms and improve crustal and upper mantle images by taking advantage of 3D forward and inverse modeling techniques. We use data from 200 earthquakes with magnitudes between 5 and 6 recorded by 262 stations provided by ORFEUS. Crustal model Crust2.0 combined with mantle model S362ANI comprises the initial 3D model. Before the iterative adjoint inversion, we determine earthquake source parameters in the initial 3D model by using 3D Green functions and their Fréchet derivatives with respect to the source parameters (i.e., centroid moment tensor and location). The updated catalog is used in the subsequent structural inversion. Since we concentrate on upper mantle structures, which involve anisotropy, transversely isotropic (frequency-dependent) traveltime sensitivity kernels are used in the iterative inversion. Taking advantage of the adjoint method, we use as many measurements as we can obtain based on comparisons between observed and synthetic seismograms. FLEXWIN (Maggi et al., 2009) is used to automatically select measurement windows, which are analyzed with a multitaper technique. The bandpass ranges from 15 to 150 seconds. Long-period surface waves and short-period body waves are combined in the source relocations and structural inversions. A statistical assessment of traveltime anomalies and logarithmic waveform differences is used to characterize the inverted sources and structure.

  7. Standards for reporting qualitative research: a synthesis of recommendations.

    PubMed

    O'Brien, Bridget C; Harris, Ilene B; Beckman, Thomas J; Reed, Darcy A; Cook, David A

    2014-09-01

    Standards for reporting exist for many types of quantitative research, but currently none exist for the broad spectrum of qualitative research. The purpose of the present study was to formulate and define standards for reporting qualitative research while preserving the requisite flexibility to accommodate various paradigms, approaches, and methods. The authors identified guidelines, reporting standards, and critical appraisal criteria for qualitative research by searching PubMed, Web of Science, and Google through July 2013; reviewing the reference lists of retrieved sources; and contacting experts. Specifically, two authors reviewed a sample of sources to generate an initial set of items that were potentially important in reporting qualitative research. Through an iterative process of reviewing sources, modifying the set of items, and coding all sources for items, the authors prepared a near-final list of items and descriptions and sent this list to five external reviewers for feedback. The final items and descriptions included in the reporting standards reflect this feedback. The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.

  8. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

Iterative processing solutions, comprising multiple cycles of material removal and measurement, can achieve higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Owing to the absence of processing forces, process fluids, and tool wear, pulsed-laser ablation offers high repeatability and can be performed directly on a measuring machine. This work takes advantage of that possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This enables efficient and precise iterative processing that is applicable to all tool materials, including diamond, and eliminates clamping errors. The concept is proven by a prototype implementation on an industrial tool measurement machine with a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. The results show production deviations within a 2 μm diameter tolerance.

  9. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  10. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in a SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee that the iterative Multi-User Detection (MUD) of IDMA runs for a sufficient number of iterations. Our objective is to minimize the total transmit power of the Source Node (SN) while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from the individual receive nodes. We formulate this as a joint power allocation and splitting problem, in which the iteration number of the MUD is also taken into consideration as the key parameter affecting both the EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and MUD iteration number in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636

  11. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in a SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee that the iterative Multi-User Detection (MUD) of IDMA runs for a sufficient number of iterations. Our objective is to minimize the total transmit power of the Source Node (SN) while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from the individual receive nodes. We formulate this as a joint power allocation and splitting problem, in which the iteration number of the MUD is also taken into consideration as the key parameter affecting both the EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and MUD iteration number in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.

  12. Achievements in the development of the Water Cooled Solid Breeder Test Blanket Module of Japan to the milestones for installation in ITER

    NASA Astrophysics Data System (ADS)

    Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato

    2009-06-01

As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water-cooled solid breeder (WCSB) TBM is being developed. This paper presents recent achievements towards the pre-installation milestones of the ITER TBMs, which consist of design integration in ITER, module qualification, and safety assessment. With respect to design integration, targeting the detailed design final report in 2012, the structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) placed in a test port of ITER, together with the layout of the cooling system, are presented. As for module qualification, a real-scale first-wall mock-up fabricated by hot isostatic pressing from the reduced-activation martensitic ferritic steel F82H, along with flow and irradiation tests of the mock-up, is presented. As for the safety milestones, the contents of the 2008 preliminary safety report, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs), and safety analyses, are presented.

  13. Response to Intervention: "Lore v. Law"

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    2018-01-01

    The legal dimension of response to intervention (RTI) has been the subject of considerable professional confusion. This brief article addresses the issue in three parts. The first part provides an update of a previous iteration that compared 12 common conceptions, referred to here as the "lore," with an objective synthesis of the…

  14. 39 CFR 3050.1 - Definitions applicable to this part.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... was applied by the Commission in its most recent Annual Compliance Determination unless a different analytical principle subsequently was accepted by the Commission in a final rule. (b) Accepted quantification technique refers to a quantification technique that was applied in the most recent iteration of the periodic...

  15. 39 CFR 3050.1 - Definitions applicable to this part.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... was applied by the Commission in its most recent Annual Compliance Determination unless a different analytical principle subsequently was accepted by the Commission in a final rule. (b) Accepted quantification technique refers to a quantification technique that was applied in the most recent iteration of the periodic...

  16. Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bishop, J. W.; Lees, J. M.; Ruiz, M. C.

    2017-12-01

Cotopaxi volcano is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. Cotopaxi erupted in August 2015 for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcano edifice; following this event, ash covered approximately 500 km2 of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source, whereas stratigraphic evidence covering the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution; iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and those with anomalous pulses before the initial P arrival or with later peaks larger than the initial P-wave correlated pulse were discarded. Using these data, initial estimates of crustal thickness and slab depth beneath the volcano were obtained, and crustal Vp/Vs ratios for the region were also calculated.
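The iterative deconvolution described, with its iteration cap and percent-change stopping rule, can be sketched in the spirit of the time-domain method commonly used for receiver functions (Ligorría and Ammon, 1999). This is a bare-bones illustration without the Gaussian low-pass filtering the study applies; all names are assumptions.

```python
import numpy as np

def iterative_deconv(radial, vertical, n_iter=200, tol=1e-3):
    """Time-domain iterative deconvolution sketch: repeatedly find the
    lag at which the vertical component best matches the residual
    radial component, add a spike there, subtract the fitted pulse,
    and stop at n_iter or when the fit improvement falls below tol."""
    n = len(radial)
    rf = np.zeros(n)
    residual = radial.copy()
    power_z = np.dot(vertical, vertical)
    prev_fit = 0.0
    for _ in range(n_iter):
        corr = np.correlate(residual, vertical, mode="full")[n - 1:]  # lags >= 0
        k = int(np.argmax(np.abs(corr)))
        amp = corr[k] / power_z
        rf[k] += amp
        shifted = np.zeros(n)
        shifted[k:] = vertical[: n - k]          # vertical delayed by k samples
        residual -= amp * shifted
        fit = 1.0 - np.dot(residual, residual) / np.dot(radial, radial)
        if fit - prev_fit < tol:                 # percent-change stopping rule
            break
        prev_fit = fit
    return rf
```

On synthetic data built by convolving a known spike train with the vertical wavelet, the loop recovers the spike lags and amplitudes exactly.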

  17. High-bandwidth and flexible tracking control for precision motion with application to a piezo nanopositioner.

    PubMed

    Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui

    2017-08-01

For precision motion, high bandwidth and flexible tracking are two important requirements for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method, but only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular-wave tracking commonly used in a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate high-frequency errors and simultaneously improve the ability to track varying references. The LTV Q-filter is designed based on the modulus maxima of the wavelet detail coefficients, calculated by the wavelet transform to determine the high-frequency locations in each iteration, with the advantages of avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieves the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, validating that by implementing the LTV filter in projection ILC, high-bandwidth and flexible tracking can be achieved simultaneously.
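The modulus-maxima localization step can be illustrated with a single-level Haar detail transform, a deliberately simplified stand-in for the paper's (unspecified) wavelet: large detail coefficients mark where the error signal has high-frequency content, and those sample regions are where the time-varying Q-filter would open its bandwidth.

```python
import numpy as np

def high_freq_regions(error, threshold_ratio=0.5):
    """Locate high-frequency regions of a tracking-error signal from
    the modulus maxima of its single-level Haar detail coefficients
    (an illustrative stand-in for a full wavelet transform).
    Assumes len(error) is even; returns the sample indices flagged."""
    error = np.asarray(error, dtype=float)
    d = (error[1::2] - error[0::2]) / np.sqrt(2.0)   # Haar detail coefficients
    mag = np.abs(d)
    mask = mag >= threshold_ratio * mag.max()        # keep the large maxima
    # map detail-coefficient indices back to pairs of signal samples
    return np.flatnonzero(np.repeat(mask, 2))
```

A sharp transient, such as the corner of a triangular reference, produces a dominant detail coefficient and is flagged at exactly the samples around it.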

  18. External field characterization using CHAMP satellite data for induction studies

    NASA Astrophysics Data System (ADS)

    Kunagu, Praveen; Chandrasekhar, E.

    2013-06-01

    Knowledge of external inducing source field morphology is essential for precise estimation of electromagnetic (EM) induction response. A better characterization of the external source field of magnetospheric origin can be achieved by decomposing it into outer and inner magnetospheric contributions, which are best represented in Geocentric Solar Magnetospheric (GSM) and Solar Magnetic (SM) reference frames, respectively. Thus we propose a spherical harmonic (SH) model to estimate the outer magnetospheric contribution, following the iterative reweighted least squares approach, using the vector magnetic data of the CHAMP satellite. The data covers almost a complete solar cycle from July 2001 to September 2010, spanning 54,474 orbits. The SH model, developed using orbit-averaged vector magnetic data, reveals the existence of a stable outer magnetospheric contribution of about 7.39 nT. This stable field was removed from the CHAMP data after transforming to SM frame. The residual field in the SM frame acts as a primary source for induction in the Earth. The analysis of this time-series using wavelet transformation showed a dominant 27-day periodicity of the geomagnetic field. Therefore, we calculated the inductive EM C-response function in a least squares sense considering the 27-day period variation as the inducing signal. From the estimated C-response, we have determined that the global depth to the perfect substitute conductor is about 1132 km and its conductivity is around 1.05 S/m.

  19. Pump-dump iterative squeezing of vibrational wave packets.

    PubMed

    Chang, Bo Y; Sola, Ignacio R

    2005-12-22

    The free motion of a nonstationary vibrational wave packet in an electronic potential is a source of interesting quantum properties. In this work we propose an iterative scheme that allows continuous stretching and squeezing of a wave packet in the ground or in an excited electronic state, by switching the wave function between both potentials with pi pulses at certain times. Using a simple model of displaced harmonic oscillators and delta pulses, we derive the analytical solution and the conditions for its possible implementation and optimization in different molecules and electronic states. We show that the main constraining parameter is the pulse bandwidth. Although in principle the degree of squeezing (or stretching) is not bounded, the physical resources increase quadratically with the number of iterations, while the achieved squeezing only increases linearly.

  20. A photoelastic-modulator-based motional Stark effect polarimeter for ITER that is insensitive to polarized broadband background reflections.

    PubMed

    Thorman, A; Michael, C; De Bock, M; Howard, J

    2016-07-01

    A motional Stark effect polarimeter insensitive to polarized broadband light is proposed. Partially polarized background light is anticipated to be a significant source of systematic error for the ITER polarimeter. The proposed polarimeter is based on the standard dual photoelastic modulator approach, but with the introduction of a birefringent delay plate, it generates a sinusoidal spectral filter instead of the usual narrowband filter. The period of the filter is chosen to match the spacing of the orthogonally polarized Stark effect components, thereby increasing the effective signal level, but resulting in the destructive interference of the broadband polarized light. The theoretical response of the system to an ITER like spectrum is calculated and the broadband polarization tolerance is verified experimentally.

  1. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform execution-time preprocessing, and executors, transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
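
    The inspector/executor split described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `inspector` assigns each iteration a wavefront level using only flow (read-after-write) dependences carried through the index arrays `in_` and `out` (both illustrative names), and `executor` runs the wavefronts in order; iterations within a wavefront are mutually independent and could run in parallel.

```python
# Sketch of run-time loop parallelization for a loop of the form
#   for i in range(n): x[out[i]] = f(x[in_[i]])
# Handles flow (read-after-write) dependences only, for illustration.

def inspector(in_, out, n):
    """Assign each iteration a wavefront level: iteration i must run
    after the last earlier iteration that wrote the location it reads."""
    level = [0] * n      # wavefront number per iteration
    last_writer = {}     # location -> level at which it was last written
    for i in range(n):
        level[i] = last_writer.get(in_[i], -1) + 1
        last_writer[out[i]] = level[i]
    waves = {}
    for i, l in enumerate(level):
        waves.setdefault(l, []).append(i)
    return [waves[l] for l in sorted(waves)]

def executor(waves, body):
    """Run wavefronts in order; iterations inside one wavefront are
    independent and could be dispatched concurrently."""
    for wave in waves:
        for i in wave:
            body(i)
```

For example, a loop where iterations 0 and 3 both read only `x[0]` forms a first wavefront, while chained iterations fall into later ones.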

  2. Advanced Data Acquisition System Implementation for the ITER Neutron Diagnostic Use Case Using EPICS and FlexRIO Technology on a PXIe Platform

    NASA Astrophysics Data System (ADS)

    Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.

    2016-04-01

    To aid in assessing the functional performance of ITER, the fission chambers (FC) at the core of the neutron diagnostic use case deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, the ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with the guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real time on the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and the RIO/FlexRIO design methodology.

  3. Heating and current drive requirements towards steady state operation in ITER

    NASA Astrophysics Data System (ADS)

    Poli, F. M.; Bonoli, P. T.; Kessel, C. E.; Batchelor, D. B.; Gorelenkova, M.; Harvey, B.; Petrov, Y.

    2014-02-01

    Steady state scenarios envisaged for ITER aim at optimizing the bootstrap current, while maintaining sufficient confinement and stability to provide the necessary fusion yield. Non-inductive scenarios will need to operate with Internal Transport Barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. However, the large pressure gradients associated with ITBs in regions of weak or negative magnetic shear can be conducive to ideal MHD instabilities, reducing the no-wall limit. The E × B flow shear from toroidal plasma rotation is expected to be low in ITER, with a major role in the ITB dynamics being played by magnetic geometry. Combinations of H/CD sources that maintain weakly reversed magnetic shear profiles throughout the discharge are the focus of this work. Time-dependent transport simulations indicate that, with a trade-off of the EC equatorial and upper launcher, the formation and sustainment of quasi-steady state ITBs could be demonstrated in ITER with the baseline heating configuration. However, with proper constraints from peeling-ballooning theory on the pedestal width and height, the fusion gain and the maximum non-inductive current are below the ITER target. Upgrades of the heating and current drive system in ITER, such as the use of Lower Hybrid current drive, could overcome these limitations, sustaining higher non-inductive current and confinement and more expanded ITBs that are ideal-MHD stable.

  4. Adjoint Inversion for Extended Earthquake Source Kinematics From Very Dense Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Somala, S.; Lapusta, N.

    2010-12-01

    Addressing key open questions about earthquake dynamics requires a radical improvement of the robustness and resolution of seismic observations of large earthquakes. Proposals for a new generation of earthquake observation systems include the deployment of “community seismic networks” of low-cost accelerometers in urban areas and the extraction of strong ground motions from high-rate optical images of the Earth's surface recorded by a large space telescope in geostationary orbit. Both systems could deliver strong motion data with a spatial density orders of magnitude higher than current seismic networks. In particular, a “space seismometer” could sample the seismic wave field at a spatio-temporal resolution of 100 m, 1 Hz over areas several hundred km wide, with an amplitude resolution of a few cm/s in ground velocity. The amount of data to process would be immensely larger than what current extended source inversion algorithms can handle, which hampers the quantitative assessment of the cost-benefit trade-offs that can guide the practical design of the proposed earthquake observation systems. We report here on the development of a scalable source imaging technique based on iterative adjoint inversion and its application to the proof-of-concept of a space seismometer. We generated synthetic ground motions for M7 earthquake rupture scenarios based on dynamic rupture simulations on a vertical strike-slip fault embedded in an elastic half-space. The scenarios include increasing levels of complexity and interesting features such as supershear rupture speed. The resulting ground shaking is then processed according to what would be captured by an optical satellite. Based on the resulting data, we perform source inversion by an adjoint/time-reversal method.
The gradient of a cost function quantifying the waveform misfit between data and synthetics is efficiently obtained by applying the time-reversed ground velocity residuals as surface force sources, back-propagating onto the locked fault plane through a seismic wave simulation and recording the fault shear stress, which is the adjoint field of the fault slip-rate. Restricting the procedure to a single iteration is known as imaging. The source reconstructed by imaging reproduces the original forward model quite well in the shallow part of the fault. However, the deeper part of the earthquake source is not well reproduced, due to the lack of data on the side and bottom boundaries of our computational domain. To resolve this issue, we are implementing the complete iterative procedure and we will report on the convergence aspects of the adjoint iterations. Our current work is also directed towards addressing the lack of data on other boundaries of our domain and improving the source reconstruction by including teleseismic data for those boundaries and non-negativity constraints on the dominant slip-rate component.

  5. Non-ideal operating conditions of the ion source prototype for the ITER neutral beam injector due to thermal deformation of the support structure.

    PubMed

    Sartori, E; Pavei, M; Marcuzzi, D; Zaccaria, P

    2014-02-01

    The beam formation and acceleration of the ITER neutral beam injector will be studied in the full-scale ion source, Source for Production of Ions of Deuterium Extracted from a RF plasma (SPIDER). It will be able to sustain a 40 A deuterium ion beam during 1-h pulses. The operating conditions of its multi-aperture electrodes will diverge from ideality as a consequence of inhomogeneous heating and thermally induced deformations in the support structure of the extraction and acceleration grids, which operate at different temperatures. Meeting the requirements on aperture alignment and on the distance between the grids, with such a large number of apertures (1280) and such large support structures, constitutes a challenge. An examination of the structure's thermal deformation in transient and steady conditions has been carried out, evaluating its effect on beam performance: the paper describes the analyses and the solutions proposed to mitigate detrimental effects.

  6. Physics design of the injector source for ITER neutral beam injector (invited).

    PubMed

    Antoni, V; Agostinetti, P; Aprile, D; Cavenago, M; Chitarin, G; Fonnesu, N; Marconato, N; Pilan, N; Sartori, E; Serianni, G; Veltri, P

    2014-02-01

    Two Neutral Beam Injectors (NBI) are foreseen to provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires the realization of a full scale prototype, to be tested and optimized at the Test Facility under construction in Padova (Italy). The beam source is the key component of the system and the design of the multi-grid accelerator is the goal of a multi-national collaborative effort. In particular, beam steering is a challenging aspect, being a tradeoff between requirements of the optics and real grids with finite thickness and thermo-mechanical constraints due to the cooling needs and the presence of permanent magnets. In the paper, a review of the accelerator physics and an overview of the whole R&D physics program aimed at the development of the injector source are presented.

  7. ITER Plasma at Electron Cyclotron Frequency Domain: Stimulated Raman Scattering off Gould-Trivelpiece Modes and Generation of Suprathermal Electrons and Energetic Ions

    NASA Astrophysics Data System (ADS)

    Stefan, V. Alexander

    2011-04-01

    Stimulated Raman scattering in the electron cyclotron frequency range of the X-Mode and O-Mode driver with the ITER plasma leads to "tail heating" via the generation of suprathermal electrons and energetic ions. The scattering off Trivelpiece-Gould (T-G) modes is studied for the gyrotron frequency of 170 GHz; X-Mode and O-Mode power of 24 MW CW; on-axis B-field of 10 T. The synergy between the two-plasmon decay and Raman scattering is analyzed in reference to the bulk plasma heating. Supported in part by Nikola TESLA Labs, La Jolla, CA.

  8. Executing SPARQL Queries over the Web of Linked Data

    NASA Astrophysics Data System (ADS)

    Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph

    The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
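
    The link-traversal idea above can be illustrated with a deliberately simplified, eager (non-pipelined) sketch; the paper's actual approach uses lazy, possibly non-blocking iterators over live HTTP requests. Here `lookup` is a stand-in for HTTP dereferencing of a URI into RDF triples, triples are plain 3-tuples, and `?`-prefixed terms mark query variables; all names are illustrative.

```python
# Simplified link-traversal query execution: the queried dataset grows
# as URIs discovered in partial results are dereferenced.

def match(dataset, pattern, binding):
    """Yield bindings extending `binding` that satisfy one triple pattern."""
    for triple in list(dataset):          # snapshot: dataset may grow
        b = dict(binding)
        ok = True
        for term, val in zip(pattern, triple):
            if term.startswith('?'):
                if b.get(term, val) != val:
                    ok = False
                    break
                b[term] = val
            elif term != val:
                ok = False
                break
        if ok:
            yield b

def execute(patterns, dataset, lookup):
    """Evaluate patterns left to right, dereferencing URIs found in
    partial results and adding the retrieved triples to the dataset."""
    bindings = [{}]
    for pattern in patterns:
        next_bindings = []
        for b in bindings:
            for nb in match(dataset, pattern, b):
                for uri in nb.values():   # follow links in partial results
                    dataset |= lookup(uri)
                next_bindings.append(nb)
        bindings = next_bindings
    return bindings
```

In this toy form each stage is materialized before the next begins; the paper's iterator pipeline instead pulls one solution at a time, which is where the blocking behaviour of classical iterators under HTTP latency becomes relevant.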

  9. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbisan, M., E-mail: marco.barbisan@igi.cnr.it; Zaniol, B.; Pasqualotto, R.

    2014-11-15

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H⁻/D⁻ ion RF source, and MITICA, a prototype of the full performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor operation and allow the performance of the two prototypes to be optimized. In particular, beam emission spectroscopy will measure the uniformity and the divergence of the fast particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model at the base of the simulations and presents the modeled Hα spectra in the case of the MITICA experiment.

  10. Sputtering effects on mirrors made of different tungsten grades

    NASA Astrophysics Data System (ADS)

    Voitsenya, V. S.; Ogorodnikova, O. V.; Bardamid, A. F.; Bondarenko, V. N.; Konovalov, V. G.; Lytvyn, P. M.; Marot, L.; Ryzhkov, I. V.; Shtan', A. F.; Skoryk, O. O.; Solodovchenko, S. I.

    2018-03-01

    Because tungsten (W) is used in present fusion devices and is the reference material for the ITER divertor and a possible plasma-facing material for DEMO, we strive to understand the response of different W grades to ion bombardment. In this study, we investigated the behavior of mirrors made of four polycrystalline W grades under long-term ion sputtering. Argon (Ar) and deuterium (D) ions extracted from a plasma were used to investigate the effect of projectile mass on surface modification. Depending on the ion fluence, the reflectance measured at normal incidence differed considerably between W grades. The lowest degradation rate of the reflectance was measured for the mirror made of recrystallized W. The highest degradation rate was found for one of the ITER-grade W samples. Pre-irradiation of a mirror with 20 MeV W6+ ions, as a simulation of neutron irradiation in ITER, had no noticeable influence on reflectance degradation under sputtering with either Ar or D ions.

  11. Solution of an eigenvalue problem for the Laplace operator on a spherical surface. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Walden, H.

    1974-01-01

    Methods for obtaining approximate solutions for the fundamental eigenvalue of the Laplace-Beltrami operator (also referred to as the membrane eigenvalue problem for the vibration equation) on the unit spherical surface are developed. Two specific types of spherical surface domains are considered: (1) the interior of a spherical triangle, i.e., the region bounded by arcs of three great circles, and (2) the exterior of a great circle arc extending for less than pi radians on the sphere (a spherical surface with a slit). In both cases, zero boundary conditions are imposed. In order to solve the resulting second-order elliptic partial differential equations in two independent variables, a finite difference approximation is derived. The symmetric (generally five-point) finite difference equations that develop are written in matrix form and then solved by the iterative method of point successive overrelaxation. Upon convergence of this iterative method, the fundamental eigenvalue is approximated by iteration utilizing the power method as applied to the finite Rayleigh quotient.
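
    The final step above, estimating the fundamental eigenvalue by iteration, can be illustrated generically. The sketch below applies the power method with a Rayleigh-quotient estimate to a small symmetric finite-difference matrix; it shows only the technique, not the thesis code, which combines it with point successive overrelaxation on the actual spherical-surface discretization.

```python
import numpy as np

def rayleigh_power(A, tol=1e-10, max_iter=1000):
    """Power method: iterate v <- A v / ||A v|| and estimate the dominant
    eigenvalue with the Rayleigh quotient v^T A v (v kept at unit norm)."""
    v = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v        # finite Rayleigh quotient
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam
```

Applied to the 3×3 one-dimensional Laplacian stencil, the iteration converges to the extreme eigenvalue 2 + √2.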

  12. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described as a scattering process through an inhomogeneous medium. An inherent nonlinearity relates the scattering medium and the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample becomes more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.

  13. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  14. ReprDB and panDB: minimalist databases with maximal microbial representation.

    PubMed

    Zhou, Wei; Gay, Nicole; Oh, Julia

    2018-01-18

    Profiling of shotgun metagenomic samples is hindered by the lack of a unified microbial reference genome database that (i) assembles genomic information from all open access microbial genomes, (ii) has a relatively small size, and (iii) is compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small size: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions from multiple sequenced strains. With these databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis of the same datasets. reprDB and panDB leverage the rapid increase in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage, memory, and indexing or analysis costs. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.

  15. Studies on the Extraction Region of the Type VI RF Driven H- Ion Source

    NASA Astrophysics Data System (ADS)

    McNeely, P.; Bandyopadhyay, M.; Franzen, P.; Heinemann, B.; Hu, C.; Kraus, W.; Riedl, R.; Speth, E.; Wilhelm, R.

    2002-11-01

    IPP Garching has spent several years developing an RF-driven H- ion source intended to be an alternative to the current ITER (International Thermonuclear Experimental Reactor) reference design ion source. An RF-driven source offers a number of advantages to ITER in terms of reduced costs and maintenance requirements. Although the RF-driven ion source has shown itself to be competitive with a standard arc filament ion source for positive ions, many questions still remain on the physics behind the production of the H- ion beam extracted from the source. With the improvements that have been implemented at the BATMAN (Bavarian Test Machine for Negative Ions) facility over the last two years, it is now possible to study both the extracted ion beam and the plasma in the vicinity of the extraction grid in greater detail. This paper will show the effect of changing the extraction and acceleration voltage on both the current and shape of the beam as measured on the calorimeter some 1.5 m downstream from the source. The extraction voltage required to operate in the plasma limit is 3 kV. The perveance optimum for the extraction system was determined to be 2.2 × 10^-6 A/V^(3/2) and occurs at 2.7 kV extraction voltage. The horizontal and vertical beam half widths vary as a function of the extracted ion current, and the horizontal half width is generally smaller than the vertical. The effect of reducing the co-extracted electron current via plasma grid biasing on the extractable H- current and the beam profile is shown. In the case of a silver-contaminated plasma it is possible to reduce the co-extracted electron current to 20% of the initial value by applying a bias of 12 V. When argon is present in the plasma, biasing is observed to have minimal effect on the beam half width, but in a pure hydrogen plasma the beam half width increases as the bias voltage increases.
New Langmuir probe studies carried out parallel to the plasma grid (in the vicinity of the peak of the external magnetic filter field), and changes to source parameters as a function of power and argon addition, are reported. The behaviour of the electron density differs when the plasma is argon seeded, showing a strong increase with RF power. The plasma potential is decreased by 2 V when argon is added to the plasma. The effect of unwanted silver, sputtered from the Faraday screen by Ar+ ions, on both the source performance and the plasma parameters is also presented. The silver dramatically downgraded source performance in terms of current density and produced an early saturation of current with applied RF power. Recently, a collaboration was begun with the Technical University of Augsburg to perform spectroscopic measurements on the Type VI ion source. The final results of this analysis are not yet ready, but some interesting initial observations on the gas temperature, dissociation degree and impurity ions will be presented.

  16. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
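
    The two-level ℓ₁/ℓ₂ prior and the first-order iteration can be sketched generically. This is a plain, un-accelerated ISTA loop for the MxNE-style objective (1/2)||M − GX||² + α||X||₂,₁, not the authors' implementation (they use an accelerated scheme such as FISTA); `G` stands for the forward (gain) matrix, `M` for the measurements, and the prox step is row-wise block soft-thresholding.

```python
import numpy as np

def prox_l21(X, alpha):
    """Proximal operator of alpha * sum_i ||X[i, :]||_2: each row (one
    source's time course) is shrunk and dropped entirely when its norm
    falls below alpha -- spatially focal, temporally smooth estimates."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-30), 0.0)
    return X * scale

def ista_mxne(G, M, alpha, n_iter=200):
    """Un-accelerated proximal gradient (ISTA) for the group-sparse
    least-squares objective; step size 1/L with L the gradient's
    Lipschitz constant."""
    L = np.linalg.norm(G, 2) ** 2
    X = np.zeros((G.shape[1], M.shape[1]))
    for _ in range(n_iter):
        X = prox_l21(X - (G.T @ (G @ X - M)) / L, alpha / L)
    return X
```

With an identity forward matrix the solution is simply the block soft-thresholding of `M`, which makes the row-dropping behaviour easy to verify.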

  17. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  18. Source term evaluation for accident transients in the experimental fusion facility ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virot, F.; Barrachin, M.; Cousin, F.

    2015-03-15

    We have studied the transport and chemical speciation of radiotoxic and toxic species for an event of water ingress in the vacuum vessel of the experimental fusion facility ITER with the ASTEC code. In particular, our evaluation takes into account assessed thermodynamic data for the gaseous beryllium species. This study shows that deposited beryllium dusts of atomic Be and Be(OH)2 are formed. It also shows that Be(OT)2 could exist in some conditions in the drain tank.

  19. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of measured spectral reflectance by removing the spectral broadening caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
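
    The broadening-removal step can be illustrated with a classical iterative deconvolution. The sketch below uses a Van Cittert-style update with a known, normalized LED line profile as the kernel; it is a stand-in for the paper's algorithm, which additionally interpolates the spectrum between the LED center wavelengths.

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=50):
    """Van Cittert-style iteration r <- r + (measured - kernel * r):
    a classical way to undo spectral broadening by a known, normalized
    instrument profile, assuming the convolution operator is well
    conditioned enough for the fixed-point iteration to converge."""
    r = measured.copy()
    for _ in range(n_iter):
        r = r + (measured - np.convolve(r, kernel, mode='same'))
    return r
```

Deconvolving a broadened spike sharpens it back toward the original line: the central value rises and the shoulders shrink with each iteration.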

  20. Closed loop adaptive optics for microscopy without a wavefront sensor

    PubMed Central

    Kner, Peter; Winoto, Lukman; Agard, David A.; Sedat, John W.

    2013-01-01

    A three-dimensional wide-field image of a small fluorescent bead contains more than enough information to accurately calculate the wavefront in the microscope objective back pupil plane using the phase retrieval technique. The phase-retrieved wavefront can then be used to set a deformable mirror to correct the point-spread function (PSF) of the microscope without the use of a wavefront sensor. This technique will be useful for aligning the deformable mirror in a widefield microscope with adaptive optics and could potentially be used to correct aberrations in samples where small fluorescent beads or other point sources are used as reference beacons. Another advantage is the high resolution of the retrieved wavefront as compared with current Shack-Hartmann wavefront sensors. Here we demonstrate effective correction of the PSF in 3 iterations. Starting from a severely aberrated system, we achieve a Strehl ratio of 0.78 and a greater than 10-fold increase in maximum intensity. PMID:24392198
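
    Phase retrieval from intensity measurements of the kind described above is classically done with a Gerchberg-Saxton loop. The 1D sketch below is illustrative only: the paper retrieves the pupil wavefront from a 3D bead image, whereas this toy alternates between a known pupil amplitude and a single measured focal-plane magnitude.

```python
import numpy as np

def gerchberg_saxton(pupil_mag, focal_mag, n_iter=100):
    """Classic Gerchberg-Saxton loop: enforce the known amplitude in the
    pupil plane and the measured amplitude in the focal plane, keeping
    the phase from the previous step each time; returns the retrieved
    pupil phase."""
    rng = np.random.default_rng(0)
    field = pupil_mag * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, pupil_mag.shape))
    for _ in range(n_iter):
        focal = np.fft.fft(field)
        focal = focal_mag * np.exp(1j * np.angle(focal))   # fix magnitude, keep phase
        field = np.fft.ifft(focal)
        field = pupil_mag * np.exp(1j * np.angle(field))
    return np.angle(field)
```

The loop has the error-reduction property: the mismatch between the measured and reconstructed focal magnitudes is non-increasing across iterations.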

  1. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    NASA Astrophysics Data System (ADS)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  2. Gravitational and Magnetic Anomaly Inversion Using a Tree-Based Geometry Representation

    DTIC Science & Technology

    2009-06-01

    find successive minimized vectors. Throughout this paper, the term iteration refers to a single loop through a stage of the global scheme, not...

  3. Overview of experimental preparation for the ITER-Like Wall at JET

    NASA Astrophysics Data System (ADS)

    Jet Efda Contributors Brezinsek, S.; Fundamenski, W.; Eich, T.; Coad, J. P.; Giroud, C.; Huber, A.; Jachmich, S.; Joffrin, E.; Krieger, K.; McCormick, K.; Lehnen, M.; Loarer, T.; de La Luna, E.; Maddison, G.; Matthews, G. F.; Mertens, Ph.; Nunes, I.; Philipps, V.; Riccardo, V.; Rubel, M.; Stamp, M. F.; Tsalas, M.

    2011-08-01

    Experiments in JET with carbon-based plasma-facing components have been carried out in preparation for the ITER-Like Wall with beryllium main chamber and full tungsten divertor. The preparatory work was twofold: (i) development of techniques, which ensure safe operation with the new wall and (ii) provision of reference plasmas, which allow a comparison of operation with carbon and metallic wall. (i) Compatibility with the W divertor with respect to energy loads could be achieved in N2 seeded plasmas at high densities and low temperatures, finally approaching partial detachment, with only moderate confinement reduction of 10%. Strike-point sweeping increases the operational space further by re-distributing the load over several components. (ii) Be and C migration to the divertor has been documented with spectroscopy and QMBs under different plasma conditions providing a database which will allow a comparison of the material transport to remote areas with metallic walls. Fuel retention rates of 1.0-2.0 × 10^21 D s^-1 were obtained as references in accompanying gas balance studies.

  4. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with a terrestrial laser scanner, which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
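
The surface-based registration in this record builds on the standard ICP loop. A minimal point-to-point sketch (not the ICProx algorithm itself, which adds proximity-based classification) alternates brute-force nearest-neighbour matching with a closed-form Kabsch alignment; the synthetic "epochs" below are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation and translation (Kabsch/SVD)
    mapping src onto dst, given point correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=20):
    """Basic point-to-point ICP: alternate brute-force nearest-neighbour
    matching against the reference cloud with a rigid re-alignment."""
    cur = src.copy()
    for _ in range(n_iter):
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :])**2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
ref = rng.uniform(-1, 1, (200, 3))                   # "epoch 1" scan
a = 0.05                                             # small rotation about z
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
moved = ref @ Rz.T + np.array([0.03, -0.02, 0.01])   # "epoch 2" scan
aligned = icp(moved, ref)
```

With a small rigid motion the nearest-neighbour matches are mostly correct from the start, so the loop converges to near-exact alignment.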

  5. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
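
The ruling-out step of iterative refocussing can be sketched with the standard implausibility measure: a candidate parameter is retained only if its standardised distance to the observation falls below a cut-off (commonly 3). The toy one-parameter "simulator" and variance values below are illustrative, not NEMO-related.

```python
import numpy as np

def implausibility(candidates, model, obs, obs_var):
    """History-matching implausibility: standardised distance between each
    candidate's model output and the observation."""
    out = np.array([model(c) for c in candidates])
    return np.abs(out - obs) / np.sqrt(obs_var)

# Toy one-parameter "simulator" observed at truth = 0.4.
model = lambda p: np.sin(3 * p) + 0.5 * p
truth = 0.4
obs = model(truth)
obs_var = 0.01**2                                    # observation-error variance

wave1 = np.linspace(0.0, 1.0, 201)                   # first wave of candidates
keep = wave1[implausibility(wave1, model, obs, obs_var) < 3.0]
```

Note that the retained ("not ruled out yet") set can be disconnected, which is one reason history matching keeps a set rather than a single optimum.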

  6. Physics design of the in-vessel collection optics for the ITER electron cyclotron emission diagnostic.

    PubMed

    Rowan, W L; Houshmandyar, S; Phillips, P E; Austin, M E; Beno, J H; Hubbard, A E; Khodak, A; Ouroua, A; Taylor, G

    2016-11-01

    Measurement of the electron cyclotron emission (ECE) is one of the primary diagnostics for electron temperature in ITER. In-vessel, in-vacuum, and quasi-optical antennas capture sufficient ECE to achieve large signal to noise with microsecond temporal resolution and high spatial resolution while maintaining polarization fidelity. Two similar systems are required. One views the plasma radially. The other is an oblique view. Both views can be used to measure the electron temperature, while the oblique is also sensitive to non-thermal distortion in the bulk electron distribution. The in-vacuum optics for both systems are subject to degradation as they have a direct view of the ITER plasma and will not be accessible for cleaning or replacement for extended periods. Blackbody radiation sources are provided for in situ calibration.

  7. Physics design of the in-vessel collection optics for the ITER electron cyclotron emission diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowan, W. L., E-mail: w.l.rowan@austin.utexas.edu; Houshmandyar, S.; Phillips, P. E.

    2016-11-15

    Measurement of the electron cyclotron emission (ECE) is one of the primary diagnostics for electron temperature in ITER. In-vessel, in-vacuum, and quasi-optical antennas capture sufficient ECE to achieve large signal to noise with microsecond temporal resolution and high spatial resolution while maintaining polarization fidelity. Two similar systems are required. One views the plasma radially. The other is an oblique view. Both views can be used to measure the electron temperature, while the oblique is also sensitive to non-thermal distortion in the bulk electron distribution. The in-vacuum optics for both systems are subject to degradation as they have a direct view of the ITER plasma and will not be accessible for cleaning or replacement for extended periods. Blackbody radiation sources are provided for in situ calibration.

  8. Physics design of the in-vessel collection optics for the ITER electron cyclotron emission diagnostic

    DOE PAGES

    Rowan, W. L.; Houshmandyar, S.; Phillips, P. E.; ...

    2016-09-07

    Measurement of the electron cyclotron emission (ECE) is one of the primary diagnostics for electron temperature in ITER. In-vessel, in-vacuum, and quasi-optical antennas capture sufficient ECE to achieve large signal to noise with microsecond temporal resolution and high spatial resolution while maintaining polarization fidelity. Two similar systems are required. One views the plasma radially. The other is an oblique view. Both views can be used to measure the electron temperature, while the oblique is also sensitive to non-thermal distortion in the bulk electron distribution. The in-vacuum optics for both systems are subject to degradation as they have a direct view of the ITER plasma and will not be accessible for cleaning or replacement for extended periods. Here, blackbody radiation sources are provided for in situ calibration.

  9. Conceptual Design of the ITER ECE Diagnostic - An Update

    NASA Astrophysics Data System (ADS)

    Austin, M. E.; Pandya, H. K. B.; Beno, J.; Bryant, A. D.; Danani, S.; Ellis, R. F.; Feder, R.; Hubbard, A. E.; Kumar, S.; Ouroua, A.; Phillips, P. E.; Rowan, W. L.

    2012-09-01

    The ITER ECE diagnostic has recently been through a conceptual design review for the entire system including front end optics, transmission line, and back-end instruments. The basic design of two viewing lines, each with a single ellipsoidal mirror focussing into the plasma near the midplane of the typical operating scenarios is agreed upon. The location and design of the hot calibration source and the design of the shutter that directs its radiation to the transmission line are issues that need further investigation. In light of recent measurements and discussion, the design of the broadband transmission line is being revisited and new options contemplated. For the instruments, current systems for millimeter wave radiometers and broad-band spectrometers will be adequate for ITER, but the option for employing new state-of-the-art techniques will be left open.

  10. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection.

    PubMed

    Yoon, Hyun Jung; Chung, Myung Jin; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT.

  11. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  12. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

    The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
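
The Gauss-Seidel relaxation used in this study can be illustrated on the 1-D model problem: a tridiagonal system of the kind produced by an implicit backward-time step. This sketch is generic and does not reproduce the paper's FVE stencils.

```python
import numpy as np

def gauss_seidel(A, b, n_iter=2000):
    """Gauss-Seidel relaxation: sweep through the unknowns, always using
    the most recently updated values."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# 1-D model problem: the tridiagonal matrix of an implicit heat-equation step.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs = gauss_seidel(A, b)
x_direct = np.linalg.solve(A, b)
```

Gauss-Seidel converges slowly on this matrix (its spectral radius approaches 1 as n grows), which is exactly the smooth-error behaviour that motivates the multigrid V-cycles discussed in the abstract.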

  13. Optimized x-ray source scanning trajectories for iterative reconstruction in high cone-angle tomography

    NASA Astrophysics Data System (ADS)

    Kingston, Andrew M.; Myers, Glenn R.; Latham, Shane J.; Li, Heyang; Veldkamp, Jan P.; Sheppard, Adrian P.

    2016-10-01

    With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy's sufficiency condition and allow high cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) The average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high cone-angle helical computed tomography; (2) Reduced streaking artifacts due to e.g. X-ray beam-hardening; (3) Misalignment and component motion manifest as blur in the tomogram rather than double-edges, which is easier to automatically correct; (4) An approximately shift-invariant point-spread-function which enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate their above-mentioned properties compared with traditional helical trajectories.

  14. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
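
The adaptive importance-sampling idea behind AMIS can be sketched in one dimension with a moment-matched Gaussian proposal. This simplified loop omits AMIS's recycling of samples across iterations, and the target below is only a stand-in for a source posterior; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * ((x - 2.0) / 0.5)**2   # unnormalised "posterior"

mu, sigma = 0.0, 3.0                                 # broad initial proposal
for _ in range(5):                                   # adaptation loop
    xs = rng.normal(mu, sigma, 2000)
    log_q = -0.5 * ((xs - mu) / sigma)**2 - np.log(sigma)
    w = np.exp(log_target(xs) - log_q)
    w /= w.sum()                                     # self-normalised weights
    mu = np.sum(w * xs)                              # moment-matched update
    sigma = np.sqrt(np.sum(w * (xs - mu)**2)) + 1e-6
```

Each pass concentrates the proposal on the high-probability region, so subsequent draws are spent where the posterior mass actually is; AMIS additionally re-weights and reuses all past draws under the evolving mixture proposal.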

  15. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
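
A blind, no-reference metric of the kind surveyed in this record can be as simple as the mean squared intensity gradient, which drops when an image is blurred. The sketch below uses a crude box blur as a stand-in for turbulence; both functions are illustrative, not taken from the paper.

```python
import numpy as np

def gradient_sharpness(img):
    """Blind (no-reference) quality score: mean squared intensity gradient;
    blurring lowers it."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def box_blur(img, k=5):
    """Crude separable box blur, standing in for turbulence-induced blur."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, out)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (64, 64))   # synthetic high-frequency scene
blurred = box_blur(scene)
```

Ranking frames by such a score is the core of "lucky imaging": keep the frames the metric ranks sharpest, discard the rest.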

  16. Feature Based Retention Time Alignment for Improved HDX MS Analysis

    NASA Astrophysics Data System (ADS)

    Venable, John D.; Scuba, William; Brock, Ansgar

    2013-04-01

    An algorithm for retention time alignment of mass-shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time-aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass-shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
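
The iterative match-and-refit idea can be sketched as follows: pair query features with reference features (here by exact mass, for simplicity; the published algorithm tolerates deuterium-shifted masses), fit a linear retention-time mapping by least squares, and repeat. The feature data, weights, and function names are illustrative.

```python
import numpy as np

def align_rt(ref, qry, n_iter=5, mass_weight=1e6):
    """Iteratively match query features to reference features (nearest in
    mass, then retention time) and refit a linear rt mapping a*rt + b."""
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        rt_mapped = a * qry[:, 0] + b
        cost = (np.abs(qry[:, 1:2] - ref[None, :, 1]) * mass_weight
                + np.abs(rt_mapped[:, None] - ref[None, :, 0]))
        idx = cost.argmin(axis=1)                    # matched reference feature
        A = np.column_stack([qry[:, 0], np.ones(len(qry))])
        (a, b), *_ = np.linalg.lstsq(A, ref[idx, 0], rcond=None)
    return a, b

# Features are (retention time, monoisotopic mass) pairs; the query file has
# a linear retention-time drift relative to the reference file.
rng = np.random.default_rng(3)
rts = rng.uniform(5.0, 60.0, 100)
masses = rng.uniform(500.0, 3000.0, 100)
ref = np.column_stack([rts, masses])
qry = np.column_stack([0.98 * rts + 1.7, masses])    # drifted retention times
a, b = align_rt(ref, qry)
```

With the drift rt' = 0.98·rt + 1.7, the fitted inverse mapping should recover a ≈ 1/0.98 and b ≈ -1.7/0.98.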

  17. AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems

    NASA Astrophysics Data System (ADS)

    Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu

    2017-10-01

    In this paper, an effective algorithm will be proposed for hybrid precoding in mmWave MIMO systems, referred to as the alternating minimization algorithm with the least squares amendment (AMLSA algorithm). To be specific, for the fully-connected structure, the presented algorithm is exploited to minimize the classical objective function and obtain the hybrid precoding matrix. It introduces an orthogonal constraint to the digital precoding matrix which is amended subsequently by the least squares after obtaining its alternating minimization iterative result. Simulation results confirm that the achievable spectral efficiency of our proposed algorithm is somewhat better than that of the existing algorithm without the least squares amendment. Furthermore, the number of iterations is reduced slightly by improving the initialization procedure.

  18. Specification and estimation of sources of bias affecting neurological studies in PET/MR with an anatomical brain phantom

    NASA Astrophysics Data System (ADS)

    Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.

    2014-01-01

    Selection of reconstruction parameters has an effect on the image quantification in PET, with an additional contribution from a scanner-specific attenuation correction method. For achieving comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed by using an anatomical brain phantom, to identify and measure the amount of bias caused due to differences in reconstruction and attenuation correction methods especially in PET/MR. Differences were estimated by using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis for measuring the reproduction of anatomical structures and the contribution of the number of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, where the HRRT was considered the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the number of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map that ignores the bone. Thus, further improvements for the PET/MR reconstruction and attenuation correction could be achieved by optimization of RAMLA-specific reconstruction parameters and implementation of bone to the attenuation template.

  19. Image quality improvements using adaptive statistical iterative reconstruction for evaluating chronic myocardial infarction using iodine density images with spectral CT.

    PubMed

    Kishimoto, Junichi; Ohta, Yasutoshi; Kitao, Shinichiro; Watanabe, Tomomi; Ogawa, Toshihide

    2018-04-01

    Single-source dual-energy CT (ssDECT) allows the reconstruction of iodine density images (IDIs) from projection based computing. We hypothesized that adding adaptive statistical iterative reconstruction (ASiR) could improve image quality. The aim of our study was to evaluate the effect and determine the optimal blend percentages of ASiR for IDI of myocardial late iodine enhancement (LIE) in the evaluation of chronic myocardial infarction using ssDECT. A total of 28 patients underwent cardiac LIE using a ssDECT scanner. IDIs between 0 and 100% of ASiR contributions in 10% increments were reconstructed. The signal-to-noise ratio (SNR) of remote myocardium and the contrast-to-noise ratio (CNR) of infarcted myocardium were measured. Transmural extent of infarction was graded using a 5-point scale. The SNR, CNR, and transmural extent were assessed for each ASiR contribution ratio. The transmural extents were compared with MRI as a reference standard. Compared to 0% ASiR, the use of 20-100% ASiR resulted in a reduction of image noise (p < 0.01) without significant differences in the signal. Compared with 0% ASiR images, reconstruction with 100% ASiR showed the highest improvement in SNR (229%; p < 0.001) and CNR (199%; p < 0.001). ASiR above 80% showed the highest ratio (73.7%) of accurate transmural extent classification. In conclusion, ASiR intensity of 80-100% in IDIs can improve image quality without changes in signal and maximizes the accuracy of transmural extent in infarcted myocardium.

  20. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images as well as extended scene images. We have demonstrated previously based on some measured images that the ACC algorithm can determine image shifts with as high an accuracy as 0.01 pixel for shifts as large as 3 pixels, and yield similar results for both point source spot images and extended scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for the ACC algorithm-based extended scene Shack-Hartmann sensors.
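
An FFT-based shift estimate of the kind underlying the ACC algorithm can be sketched with plain phase correlation, which recovers integer-pixel translations from the peak of the normalised cross-power spectrum (the ACC algorithm itself iterates to sub-pixel accuracy; this sketch does not). The function name and test scene are illustrative.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer-pixel translation of image a relative to image b, from the
    peak of the normalised cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                          # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                        # wrap into signed range
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (64, 64))
shifted = np.roll(scene, (3, -2), axis=(0, 1))      # known (dy, dx) = (3, -2)
dy, dx = phase_correlation_shift(shifted, scene)
```

Normalising away the amplitude makes the correlation peak sharp regardless of scene content, which is why this family of methods works for both point-source spots and extended scenes.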

  1. Maximum-likelihood-based extended-source spatial acquisition and tracking for planetary optical communications

    NASA Astrophysics Data System (ADS)

    Tsou, Haiping; Yan, Tsun-Yee

    1999-04-01

    This paper describes an extended-source spatial acquisition and tracking scheme for planetary optical communications. This scheme uses the Sun-lit Earth image as the beacon signal, which can be computed according to the current Sun-Earth-Probe angle from a pre-stored Earth image or a received snapshot taken by another Earth-orbiting satellite. Onboard the spacecraft, the reference image is correlated in the transform domain with the received image obtained from a detector array, which is assumed to have each of its pixels corrupted by an independent additive white Gaussian noise. The coordinate of the ground station is acquired and tracked, respectively, by an open-loop acquisition algorithm and a closed-loop tracking algorithm derived from the maximum likelihood criterion. As shown in the paper, the optimal spatial acquisition requires solving two nonlinear equations, or iteratively solving their linearized variants, to estimate the coordinate when translation in the relative positions of onboard and ground transceivers is considered. A similar assumption of linearization leads to the closed-loop spatial tracking algorithm in which the loop feedback signals can be derived from the weighted transform-domain correlation. Numerical results using a sample Sun-lit Earth image demonstrate that sub-pixel resolutions can be achieved by this scheme in a high disturbance environment.

  2. Rapid iterative reanalysis for automated design

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.

    1973-01-01

    A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained using a commonly applied analysis procedure that served as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
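
The core approximation, expanding the stiffness matrix in a first-order Taylor series about the initial design, can be sketched on a toy 2-DOF system. The matrices and parameter values below are illustrative, not from the paper.

```python
import numpy as np

def K(p):
    """Toy 2-DOF stiffness matrix, nonlinear in the design parameter p."""
    return np.array([[2.0 + p**2, -1.0],
                     [-1.0, 2.0 + 0.5 * p]])

M = np.eye(2)                                   # mass matrix (identity here)
p0, dp = 1.0, 0.05
dK = (K(p0 + 1e-6) - K(p0)) / 1e-6              # sensitivity dK/dp at p0
K_lin = K(p0) + dK * dp                         # linear Taylor expansion

# Since M = I, natural frequencies are square roots of K's eigenvalues.
freq_exact = np.sqrt(np.linalg.eigvalsh(K(p0 + dp)))
freq_approx = np.sqrt(np.linalg.eigvalsh(K_lin))
freq_initial = np.sqrt(np.linalg.eigvalsh(K(p0)))
```

For a small design change the linearized reanalysis tracks the exact frequencies to second-order accuracy, which is why it is much cheaper than re-forming and re-solving the full system at every design iteration.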

  3. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. 
The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.

  4. Using Mendeley to Support Collaborative Learning in the Classroom

    ERIC Educational Resources Information Center

    Khwaja, Tehmina; Eddy, Pamela L.

    2015-01-01

    The purpose of this study was to explore the use of Mendeley, a free online reference management and academic networking software, as a collaborative tool in the college classroom. Students in two iterations of a Graduate class used Mendeley to collaborate on a policy research project over the course of a semester. The project involved…

  5. Online Learner Self-Regulation: Learning Presence Viewed through Quantitative Content- and Social Network Analysis

    ERIC Educational Resources Information Center

    Shea, Peter; Hayes, Suzanne; Smith, Sedef Uzuner; Vickers, Jason; Bidjerano, Temi; Gozza-Cohen, Mary; Jian, Shou-Bang; Pickett, Alexandra M.; Wilde, Jane; Tseng, Chi-Hua

    2013-01-01

    This paper presents an extension of an ongoing study of online learning framed within the community of inquiry (CoI) model (Garrison, Anderson, & Archer, 2001) in which we further examine a new construct labeled as "learning presence." We use learning presence to refer to the iterative processes of forethought and planning,…

  6. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
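The compare-and-update loop described in this abstract can be illustrated with a short sketch. The greedy single-drug perturbation rule, the response function, and all parameter names below are illustrative assumptions, not the authors' actual ARU algorithm:

```python
import random

def aru_optimize(response, n_drugs, levels, iters=200, seed=0):
    """Hypothetical sketch of an adaptive-reference stochastic search:
    each iteration perturbs one drug level of the reference combination,
    keeps the move if the measured response beats the reference response,
    and lets the reference adapt to the best combination seen so far."""
    rng = random.Random(seed)
    start = [levels // 2] * n_drugs          # begin at mid-range doses
    best_x, best_r = list(start), response(start)
    for _ in range(iters):
        i = rng.randrange(n_drugs)           # pick one drug to perturb
        cand = list(best_x)
        cand[i] = min(levels - 1, max(0, cand[i] + rng.choice((-1, 1))))
        r = response(cand)
        if r > best_r:                       # beneficial update direction
            best_x, best_r = cand, r         # reference combination adapts
    return best_x, best_r
```

With a smooth toy response surface this hill-climbing variant converges in a handful of iterations; the real ARU algorithm additionally builds its reference from past response observations to handle noisy measurements.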

  7. The Cadarache negative ion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massmann, P.; Bottereau, J.M.; Belchenko, Y.

    1995-12-31

    Up to energies of 140 keV neutral beam injection (NBI) based on positive ions has proven to be a reliable and flexible plasma heating method and has provided major contributions to most of the important experiments on virtually all large tokamaks around the world. As a candidate for additional heating and current drive on next step fusion machines (ITER ao) it is hoped that NBI can be equally successful. The ITER NBI parameters of 1 MeV, 50 MW D° demand primary D⁻ beams with current densities of at least 15 mA/cm². Although considerable progress has been made in the area of negative ion production and acceleration, the high demands still require substantial and urgent development. Regarding negative ion production, Cs seeded plasma sources lead the way. Adding a small amount of Cs to the discharge (Cs seeding) not only increases the negative ion yield by a factor of 3–5 but also has the advantage that the discharge can be run at lower pressures. This is beneficial for the reduction of stripping losses in the accelerator. Multi-ampere negative ion production in a large plasma source is studied in the MANTIS experiment. Acceleration and neutralization at ITER relevant parameters is the objective of the 1 MV SINGAP experiment.

  8. Progress of the ELISE test facility: towards one hour pulses in hydrogen

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Fantz, U.; Heinemann, B.; Kraus, W.; Riedl, R.; Wimmer, C.; the NNBI Team

    2016-10-01

    In order to fulfil the ITER requirements, the negative hydrogen ion source used for NBI has to deliver a high source performance, i.e. a high extracted negative ion current and simultaneously a low co-extracted electron current over a pulse length up to 1 h. Negative ions will be generated by the surface process in a low-temperature low-pressure hydrogen or deuterium plasma. Therefore, a certain amount of caesium has to be deposited on the plasma grid in order to obtain a low surface work function and consequently a high negative ion production yield. This caesium is re-distributed by the influence of the plasma, resulting in temporal instabilities of the extracted negative ion current and the co-extracted electrons over long pulses. This paper describes experiments performed in hydrogen operation at the half-ITER-size NNBI test facility ELISE in order to develop a caesium conditioning technique for more stable long pulses at an ITER relevant filling pressure of 0.3 Pa. A significant improvement of the long pulse stability is achieved. Together with different plasma diagnostics it is demonstrated that this improvement is correlated to the interplay of very small variations of parameters like the electrostatic potential and the particle densities close to the extraction system.

  9. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
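The permissible-region shrinking idea can be sketched in a simplified linear form. The Green's-function matrix `G`, the shrink factor, and the plain least-squares sub-solver below are assumptions for illustration, not the authors' L1-norm formulation:

```python
import numpy as np

def shrink_region_reconstruct(G, y, shrink=0.5, min_pts=2, iters=10):
    """Sketch (not the published algorithm): solve a source problem
    restricted to a permissible set of points, then iteratively shrink
    the set to the points most likely to carry source power."""
    n = G.shape[1]
    allowed = np.arange(n)                  # start: whole object permissible
    best = np.zeros(n)
    for _ in range(iters):
        q, *_ = np.linalg.lstsq(G[:, allowed], y, rcond=None)
        best = np.zeros(n)
        best[allowed] = q
        if len(allowed) <= min_pts:
            break
        keep = max(min_pts, int(len(allowed) * shrink))
        order = np.argsort(-np.abs(q))[:keep]   # strongest candidates survive
        allowed = allowed[order]
    return best
```

For exact, well-conditioned data this recovers a sparse source support without any a priori information about source locations, which mirrors the motivation stated in the abstract.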

  10. Performance Analysis of the ITER Plasma Position Reflectometry (PPR) Ex-vessel Transmission Lines

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Simonetto, A.; Cappa, Á.; Rincón, M. E.; Cabrera, S.; Ramos, F. J.

    2018-03-01

    As the design of the ITER Plasma Position Reflectometry (PPR) diagnostic progresses, some segments of the transmission line have become fully specified and estimations of their performance can already be obtained. This work presents the calculations carried out for the longest section of the PPR, which is in final state of design and will be the main contributor to the total system performance. Considering the 88.9 mm circular corrugated waveguide (CCWG) that was previously chosen, signal degradation calculations have been performed. Different degradation sources have been studied: ohmic attenuation losses for CCWG; mode conversion losses for gaps, mitre bends, waveguide sag and different types of misalignments; reflection and absorption losses due to microwave windows and coupling losses to free space Gaussian beam. Contributions from all these sources have been integrated to give a global estimation of performance in the transmission lines segments under study.

  11. Accurate tissue characterization in low-dose CT imaging with pure iterative reconstruction.

    PubMed

    Murphy, Kevin P; McLaughlin, Patrick D; Twomey, Maria; Chan, Vincent E; Moloney, Fiachra; Fung, Adrian J; Chan, Faimee E; Kao, Tafline; O'Neill, Siobhan B; Watson, Benjamin; O'Connor, Owen J; Maher, Michael M

    2017-04-01

    We assess the ability of low-dose hybrid iterative reconstruction (IR) and 'pure' model-based IR (MBIR) images to maintain accurate Hounsfield unit (HU)-determined tissue characterization. Standard-protocol (SP) and low-dose modified-protocol (MP) CTs were contemporaneously acquired in 34 Crohn's disease patients referred for CT. SP image reconstruction was via the manufacturer's recommendations (60% FBP, filtered back projection; 40% ASiR, Adaptive Statistical Iterative Reconstruction; SP-ASiR40). MP data sets underwent four reconstructions (100% FBP; 40% ASiR; 70% ASiR; MBIR). Three observers measured tissue volumes using HU thresholds for fat, soft tissue and bone/contrast on each data set. Analysis was via SPSS. Inter-observer agreement was strong for 1530 datapoints (rs > 0.9). MP-MBIR tissue volume measurement was superior to other MP reconstructions and closely correlated with the reference SP-ASiR40 images for all tissue types. MP-MBIR superiority was most marked for fat volume calculation: close SP-ASiR40 and MP-MBIR Bland-Altman plot correlation was seen with the lowest average difference (336 cm³) when compared with other MP reconstructions. Hounsfield unit-determined tissue volume calculations from MP-MBIR images resulted in values comparable to SP-ASiR40 calculations and values that are superior to MP-ASiR images. Accuracy of estimation of volume of tissues (e.g. fat) using segmentation software on low-dose CT images appears optimal when reconstructed with pure IR. © 2016 The Royal Australian and New Zealand College of Radiologists.
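HU-threshold tissue volume measurement of the kind the observers performed can be sketched as follows; the threshold ranges below are illustrative values, not those used in the study:

```python
import numpy as np

def tissue_volumes(hu, voxel_cm3):
    """Classify each voxel by Hounsfield unit range and sum voxel
    volumes per tissue class. Ranges are illustrative assumptions."""
    ranges = {
        "fat": (-190, -30),
        "soft_tissue": (-29, 150),
        "bone_contrast": (151, 3000),
    }
    return {name: float(((hu >= lo) & (hu <= hi)).sum() * voxel_cm3)
            for name, (lo, hi) in ranges.items()}
```

Because the classification depends only on per-voxel HU values, any reconstruction-induced HU bias (e.g. from aggressive low-dose IR) translates directly into volume error, which is what the study quantifies.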

  12. Investigation of the boundary layer during the transition from volume to surface dominated H- production at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Schiesko, L.; Fantz, U.

    2016-02-01

    BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with a 1/8 scale H- source for the ITER heating neutral beam injection. Several diagnostics in the boundary layer close to the plasma grid (first grid of the accelerator system) followed the transition from volume to surface dominated H- production starting with a Cs-free, cleaned source and subsequent evaporation of caesium, while the source has been operated at ITER relevant pressure of 0.3 Pa: Langmuir probes are used to determine the plasma potential, optical emission spectroscopy is used to follow the caesiation process, and cavity ring-down spectroscopy allows for the measurement of the H- density. The influence on the plasma during the transition from an electron-ion plasma towards an ion-ion plasma, in which negative hydrogen ions become the dominant negatively charged particle species, is seen in a strong increase of the H- density combined with a reduction of the plasma potential. A clear correlation of the extracted current densities (jH-, je) exists with the Cs emission.

  13. Investigation of the boundary layer during the transition from volume to surface dominated H⁻ production at the BATMAN test facility.

    PubMed

    Wimmer, C; Schiesko, L; Fantz, U

    2016-02-01

    BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with a 1/8 scale H(-) source for the ITER heating neutral beam injection. Several diagnostics in the boundary layer close to the plasma grid (first grid of the accelerator system) followed the transition from volume to surface dominated H(-) production starting with a Cs-free, cleaned source and subsequent evaporation of caesium, while the source has been operated at ITER relevant pressure of 0.3 Pa: Langmuir probes are used to determine the plasma potential, optical emission spectroscopy is used to follow the caesiation process, and cavity ring-down spectroscopy allows for the measurement of the H(-) density. The influence on the plasma during the transition from an electron-ion plasma towards an ion-ion plasma, in which negative hydrogen ions become the dominant negatively charged particle species, is seen in a strong increase of the H(-) density combined with a reduction of the plasma potential. A clear correlation of the extracted current densities (j(H(-)), j(e)) exists with the Cs emission.

  14. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
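The inner machinery of such an inversion, a regularized Gauss-Newton step whose model update is obtained by conjugate-gradient iteration, can be sketched generically. The forward problem in the test is a toy linear model, not a CSEM simulator, and the damping scheme is a simplified assumption:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=200):
    """Minimal conjugate-gradient solver for a symmetric
    positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def gauss_newton(resid, jac, m0, lam=1e-3, iters=10):
    """Regularized Gauss-Newton: at each iteration solve
    (J^T J + lam I) dm = -J^T r for the model perturbation dm."""
    m = m0.astype(float)
    for _ in range(iters):
        r, J = resid(m), jac(m)
        A = J.T @ J + lam * np.eye(len(m))
        dm = cg(A, -J.T @ r)
        m = m + dm
    return m
```

In a real CSEM inversion `resid` and `jac` would come from the adaptive finite-element forward solver and its parametric sensitivities; the normal-equation-plus-CG structure of each iteration is the same.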

  15. A fast feedback method to design easy-molding freeform optical system with uniform illuminance and high light control efficiency.

    PubMed

    Hongtao, Li; Shichao, Chen; Yanjun, Han; Yi, Luo

    2013-01-14

    A feedback method combined with a fitting technique based on variable separation mapping is proposed to design freeform optical systems for an extended LED source with prescribed illumination patterns, especially with uniform illuminance distribution. The feedback process performs well with extended sources, while the fitting technique contributes not only to a decrease in the number of sub-surfaces in discontinuous freeform lenses, which may cause loss in manufacture, but also to a reduction in the number of feedback iterations. It is proved that light control efficiency can be improved by 5%, while keeping a high uniformity of 82%, with only two feedback iterations and one fitting operation. Furthermore, the polar angle θ and azimuthal angle φ are used to specify the light direction from the light source, and the (θ,φ)-(x,y) based mapping and feedback strategy ensures that even if a few discontinuous sections along the equi-φ plane exist in the system, they are perpendicular to the base plane, making the surfaces eligible for manufacture by injection molding.

  16. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm⁻¹, which was increased to 1.2 mm⁻¹ by SDIR, at half maximum.

  17. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm⁻¹, which was increased to 1.2 mm⁻¹ by SDIR, at half maximum.

  18. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2¹⁰ times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
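The zoom-in idea behind the ilFT can be sketched as follows: evaluate the transform on a narrow frequency grid around the current peak estimate, then shrink the window each iteration. The window-shrinking rule and grid size here are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def local_dft_peak(x, fs, iters=8, nfreq=64):
    """Sketch of iterative local Fourier analysis: each pass evaluates
    the DTFT of x on nfreq points inside the current band, locates the
    spectral peak, and narrows the band around it."""
    n = np.arange(len(x))
    lo, hi = 0.0, fs / 2                 # start with the full band
    f_peak = 0.0
    for _ in range(iters):
        freqs = np.linspace(lo, hi, nfreq)
        spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x)
        f_peak = freqs[int(np.argmax(spec))]
        half = (hi - lo) / nfreq         # one grid spacing on each side
        lo, hi = f_peak - half, f_peak + half
    return f_peak
```

Each pass shrinks the analysis band by a fixed factor, so the attainable frequency resolution grows geometrically with the number of iterations while each iteration costs only an nfreq-point local transform.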

  19. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures that perform execution time preprocessing, and executors, i.e. transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
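The inspector/executor split can be sketched in a few lines. The dependence model below (each loop iteration touching a known set of array indices) is a simplification of the paper's run-time analysis, and the names are illustrative:

```python
def inspector(indices):
    """Inspector phase: given the array indices each loop iteration
    touches, assign every iteration to the earliest wavefront that
    respects its data dependences (no two iterations in the same
    wavefront touch the same index)."""
    wavefront_of = {}    # last wavefront that touched each array index
    schedule = []        # schedule[w] = iterations placed in wavefront w
    for it, idxs in enumerate(indices):
        w = max((wavefront_of.get(i, -1) for i in idxs), default=-1) + 1
        if w == len(schedule):
            schedule.append([])
        schedule[w].append(it)
        for i in idxs:
            wavefront_of[i] = w
    return schedule

def executor(schedule, body):
    """Executor phase: run wavefronts in order; iterations within one
    wavefront are independent and could run in parallel."""
    for wave in schedule:
        for it in wave:          # parallelizable in a real implementation
            body(it)
```

Because the inspector's output depends only on the index pattern, its cost is amortized exactly as the abstract describes: the same schedule is reused whenever the loop runs again with the same dependency structure.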

  20. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. 
The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
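The FWHM-TR measure discussed above is straightforward to compute from a sampled weighting function; this sketch (names and interpolation scheme are illustrative) takes the width at half maximum via linear interpolation of the crossing points:

```python
import numpy as np

def fwhm_temporal_resolution(t, w):
    """Full width at half maximum of a sampled data weighting function
    w(t): locate the first and last samples at or above half maximum,
    then linearly interpolate the two half-maximum crossings."""
    half = w.max() / 2.0
    above = np.nonzero(w >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # linear interpolation of the half-maximum crossing between samples
        return t[i] + (half - w[i]) * (t[j] - t[i]) / (w[j] - w[i])

    left = t[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = t[i1] if i1 == len(w) - 1 else cross(i1, i1 + 1)
    return right - left
```

The total-TR measure, by contrast, would simply be the support of `w`, i.e. the distance between the first and last nonzero samples, which is why the two measures diverge for weighting functions with long shallow tails.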

  1. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. 
Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zhengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)

  2. František Nábělek's Iter Turcico-Persicum 1909-1910 - database and digitized herbarium collection.

    PubMed

    Kempa, Matúš; Edmondson, John; Lack, Hans Walter; Smatanová, Janka; Marhold, Karol

    2016-01-01

    The Czech botanist František Nábělek (1884-1965) explored the Middle East in 1909-1910, visiting what are now Israel, Palestine, Jordan, Syria, Lebanon, Iraq, Bahrain, Iran and Turkey. He described four new genera, 78 species, 69 varieties and 38 forms of vascular plants, most of these in his work Iter Turcico-Persicum (1923-1929). The main herbarium collection of Iter Turcico-Persicum comprises 4163 collection numbers (some with duplicates), altogether 6465 specimens. It is currently deposited in the herbarium SAV. In addition, some fragments and duplicates are found in B, E, W and WU. The whole collection at SAV was recently digitized, and both images and metadata are available via the web portal www.nabelek.sav.sk, as well as through JSTOR Global Plants and the Biological Collection Access Service. Most localities were georeferenced, and the web portal provides a mapping facility. Annotation of specimens is available via the AnnoSys facility. For each specimen a CETAF stable identifier is provided, enabling correct reference to the image and metadata.

  4. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.

  5. Development of Vertical Cable Seismic System (3)

    NASA Astrophysics Data System (ADS)

    Asakawa, E.; Murakami, F.; Tsukahara, H.; Mizohata, S.; Ishikawa, K.

    2013-12-01

    The VCS (Vertical Cable Seismic) method is a reflection seismic technique. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean-bottom sources. By analyzing the reflections from the sub-seabed, we can image the subsurface structure. Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed it for the hydrothermal-deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. We have carried out several VCS surveys combining surface-towed, deep-towed and ocean-bottom sources. Survey water depths range from 100 m to 2100 m. The survey targets include not only hydrothermal deposits but also oil and gas exploration. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water. Uncertainty in the positions of the source and of the hydrophones degrades the quality of the subsurface image. GPS navigation is available at the sea surface, but for a deep-towed or ocean-bottom source the accuracy of shot positions from SSBL/USBL is not sufficient for very high-resolution imaging. We have therefore developed another approach that determines positions in the water using travel-time data from the source to the VCS hydrophones. In the data acquisition stage, we estimate the VCS position with a slant-ranging method from the sea surface. The position of the deep-towed or ocean-bottom source is estimated by SSBL/USBL. The water velocity profile is measured by XCTD. After data acquisition, we pick the first-break times of the VCS recorded data.
The estimated positions of shot points and receiver points in the field contain errors. Using them as initial guesses, we iteratively invert the shot and receiver positions to match the travel-time data. After several iterations we obtain the most probable positions. Integrating constraints on the VCS hydrophone positions, such as the fixed 10 m spacing, accelerates the convergence of the iterative inversion and improves the results. The accuracy of the positions estimated from the travel-time data is sufficient for VCS data processing.
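
    The travel-time inversion step described above can be sketched as a small Gauss-Newton least-squares problem. This is an illustrative reconstruction, not the authors' code: the constant water velocity, receiver layout, initial guess and function names are assumptions, and only a single source position is solved for (the actual inversion treats shots and receivers jointly).

```python
import numpy as np

def locate_source(hydrophones, t_obs, c=1500.0, x0=None, n_iter=10):
    """Gauss-Newton inversion of first-break travel times for one source position.

    hydrophones : (N, 3) receiver coordinates [m]
    t_obs       : (N,) observed travel times [s]
    c           : assumed constant water velocity [m/s]
    """
    x = np.zeros(3) if x0 is None else np.asarray(x0, float)
    for _ in range(n_iter):
        d = x - hydrophones                 # (N, 3) source-to-receiver vectors
        r = np.linalg.norm(d, axis=1)       # ranges
        residual = t_obs - r / c            # observed minus predicted times
        J = d / (c * r[:, None])            # Jacobian of predicted travel times
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x = x + dx
    return x
```

    In practice each hydrophone position would also be an unknown, with the 10 m spacing entering as an additional constraint equation.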

  6. Description of Existing Data for Integrated Landscape Monitoring in the Puget Sound Basin, Washington

    USGS Publications Warehouse

    Aiello, Danielle P.; Torregrosa, Alicia; Jason, Allyson L.; Fuentes, Tracy L.; Josberger, Edward G.

    2008-01-01

    This report summarizes existing geospatial data and monitoring programs for the Puget Sound Basin in northwestern Washington. This information was assembled as a preliminary data-development task for the U.S. Geological Survey (USGS) Puget Sound Integrated Landscape Monitoring (PSILM) pilot project. The PSILM project seeks to support natural resource decision-making by developing a 'whole system' approach that links ecological processes at the landscape level to the local level (Benjamin and others, 2008). Part of this effort will include building the capacity to provide cumulative information about impacts that cross jurisdictional and regulatory boundaries, such as cumulative effects of land-cover change and shoreline modification, or region-wide responses to climate change. The PSILM project study area is defined as the 23 HUC-8 (hydrologic unit code) catchments that comprise the watersheds that drain into Puget Sound and their near-shore environments. The study area includes 13 counties and more than four million people. One goal of the PSILM geospatial database is to integrate spatial data collected at multiple scales across the Puget Sound Basin marine and terrestrial landscape. The PSILM work plan specifies an iterative process that alternates between tasks associated with data development and tasks associated with research or strategy development. For example, an initial work-plan goal was to delineate the study area boundary. Geospatial data required to address this task included data from ecological regions, watersheds, jurisdictions, and other boundaries. This assemblage of data provided the basis for identifying larger research issues and delineating the study-area boundary based on these research needs. Once the study-area boundary was agreed upon, the next iteration between data development and research activities was guided by questions about data availability, data extent, data abundance, and data types. 
This report is not intended as an exhaustive compilation of all available geospatial data; rather, it is a collection of information about geospatial data that can be used to help answer the suite of questions posed after the study-area boundary was defined. This information will also be useful to the PSILM team for future project tasks, such as assessing monitoring gaps, exploring monitoring-design strategies, identifying and deriving landscape indicators and metrics, and visual geographic communication. The two main geospatial data types referenced in this report - base-reference layers and monitoring data - originated from numerous and varied sources. In addition to collecting information and metadata about the base-reference layers, the data themselves were collected for project needs, such as developing maps for visual communication among team members and with outside groups. In contrast, only information about the data was typically required for the monitoring data. The information on base-reference layers and monitoring data included in this report is only as detailed as what was readily available from the sources themselves. Although this report may appear to lack consistency between data records, the varying degrees of detail it contains are merely a reflection of varying source detail. This compilation is just a beginning. All data listed also are being catalogued in spreadsheets and knowledge-management systems. Our efforts are ongoing as we develop a geospatial catalog for the PSILM pilot project.

  7. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning-beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000-focal-spot X-ray source and a small photon-counting detector. Ninety fluoroscopic projections, or "superviews", spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
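
    The transmission maximum-likelihood objective behind ML-TR can be illustrated on a toy system. The sketch below uses a plain gradient-ascent form of the update on a dense system matrix, assuming counts y_i ~ Poisson(b_i exp(-[A mu]_i)); the 15 ordered subsets and the flat-field normalization of the actual algorithm are omitted, and all names are illustrative.

```python
import numpy as np

def mltr_gradient_step(mu, A, y, b, step):
    """One gradient-ascent step on the transmission Poisson log-likelihood.

    For y_i ~ Poisson(b_i * exp(-[A mu]_i)), the gradient w.r.t. mu_j is
    sum_i A_ij * (b_i * exp(-[A mu]_i) - y_i).
    """
    expected = b * np.exp(-A @ mu)        # model-predicted counts
    grad = A.T @ (expected - y)           # log-likelihood gradient
    return np.maximum(mu + step * grad, 0.0)  # enforce non-negative attenuation
```

    On noiseless data this iteration recovers the true attenuation values; real ML-TR uses a surrogate-based multiplicative update and subsets for speed.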

  8. Effect of automated tube voltage selection, integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of 3rd generation dual-source aortic CT angiography: An intra-individual comparison.

    PubMed

    Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph

    2016-05-01

    To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated circuit detector and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501), but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1kV and 6.7±3.9mSv with protocol 2, versus 120±0kV and 11.5±5.2mSv with protocol 1 (both p<0.0001). Aortic CTA performed with 3rd generation DSCT, ATVS, integrated circuit detector, and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison to 120kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  10. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Hoffman, J; McNitt-Gray, M

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph's method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5 GB, and reconstruction time was 50 seconds per iteration.
Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as "FreeCT-ICD". NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
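
    The voxel-by-voxel analytic update at the core of ICD can be sketched for a tiny dense system. This is a hedged illustration of coordinate descent on a penalized-least-squares objective ||y - Ax||² + β·Σ(x_j - x_k)² with a symmetric quadratic neighbor penalty; the matrix, neighbor lists and function names are assumptions, and the stored-system-matrix and B-spline forward model are omitted.

```python
import numpy as np

def icd_sweep(x, A, y, neighbors, beta):
    """One in-place ICD sweep for min ||y - A x||^2 + beta * sum (x_j - x_k)^2.

    neighbors[j] lists the voxels coupled to voxel j (symmetric pairs, so each
    pair contributes twice to the penalty); each coordinate is minimized exactly.
    """
    r = y - A @ x                      # running residual, updated per voxel
    for j in range(x.size):
        a = A[:, j]
        nb = neighbors[j]
        # Closed-form minimizer of the 1-D quadratic in x[j]:
        num = a @ r - 2.0 * beta * (len(nb) * x[j] - x[nb].sum())
        den = a @ a + 2.0 * beta * len(nb)
        delta = num / den
        x[j] += delta
        r -= a * delta                 # keep the residual consistent
    return x
```

    Sweeping the coordinates repeatedly drives the objective's gradient to zero; the published method additionally orders sweeps axially then transaxially and uses 8 in-plane neighbors.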

  11. Improved motion correction in PROPELLER by using grouped blades as reference.

    PubMed

    Liu, Zhe; Zhang, Zhe; Ying, Kui; Yuan, Chun; Guo, Hua

    2014-03-01

    To develop a robust reference generation method for improving PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) reconstruction. A new reference generation method, grouped-blade reference (GBR), is proposed for calculating rotation angles and translation shifts in PROPELLER. Instead of using a single-blade reference (SBR) or combined-blade reference (CBR), our method classifies blades by their relative correlations and groups similar blades together as the reference, preventing inconsistent data from interfering with the correction process. Numerical simulations and in vivo experiments were used to evaluate the performance of GBR for PROPELLER, which was further compared with SBR and CBR in terms of error level and computation cost. Both simulations and in vivo experiments demonstrate that GBR-based PROPELLER provides better correction for random or bipolar motion compared with SBR or CBR. It not only produces images with a lower error level but also needs fewer iteration steps to converge. A grouped-blade reference selection method was investigated for PROPELLER MRI. It helps to improve the accuracy and robustness of motion correction for various motion patterns. Copyright © 2013 Wiley Periodicals, Inc.
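
    The idea of grouping blades by relative correlation can be sketched as follows. This is a deliberately simplified stand-in for GBR: real PROPELLER blades are 2-D k-space strips, the grouping criterion here is naive, and all names and thresholds are assumptions.

```python
import numpy as np

def grouped_blade_reference(blades, threshold=0.9):
    """Group blades by pairwise correlation and average the largest group.

    blades : (n_blades, n_samples) array of blade data (e.g. magnitudes).
    Returns (reference, members): the mean of the largest correlated group,
    which keeps motion-corrupted outlier blades out of the reference.
    """
    corr = np.corrcoef(blades)              # pairwise correlation matrix
    groups = [np.where(corr[i] >= threshold)[0] for i in range(len(blades))]
    best = max(groups, key=len)             # largest consistent group
    return blades[best].mean(axis=0), best
```

    Rotation and translation for each blade would then be estimated against this group-averaged reference rather than against a single blade or the average of all blades.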

  12. A Novel Artificial Bee Colony Algorithm Based on Internal-Feedback Strategy for Image Template Matching

    PubMed Central

    Gong, Li-Gang

    2014-01-01

    Image template matching refers to the technique of locating a given reference image over a source image such that they are the most similar. It is a fundamental task in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme. One is similarity measurement and the other is best-match location search. In this work, we choose the well-known normalized cross correlation model as the similarity criterion. The search for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. The IF-ABC algorithm is distinguished by its effort to fight against premature convergence. This is achieved by discarding the conventional roulette selection procedure of the ABC algorithm, so as to give each employed bee an equal chance to be followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for the search intensity in subsequent cycles of iteration. We have investigated four ideal template matching cases as well as four actual cases using different search algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching task than the conventional ABC and two state-of-the-art modified ABC algorithms. PMID:24892107
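
    The normalized cross correlation criterion that the search optimizes can be sketched directly. The exhaustive scan below is only a reference implementation of the similarity measurement, not of IF-ABC itself, which would instead have bees sample candidate (row, col) positions and evaluate this same score.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation template matching.

    Returns ((row, col), score) for the best-match top-left corner.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()                       # zero-mean window
            denom = np.sqrt((wz * wz).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

    A perfect match scores 1.0; the swarm search trades this exhaustive scan for far fewer NCC evaluations on large images.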

  13. Deep mantle structure as a reference frame for movements in and on the Earth

    PubMed Central

    Torsvik, Trond H.; van der Voo, Rob; Doubrovine, Pavel V.; Burke, Kevin; Steinberger, Bernhard; Ashwal, Lewis D.; Trønnes, Reidar G.; Webb, Susan J.; Bull, Abigail L.

    2014-01-01

    Earth’s residual geoid is dominated by a degree-2 mode, with elevated regions above large low shear-wave velocity provinces on the core–mantle boundary beneath Africa and the Pacific. The edges of these deep mantle bodies, when projected radially to the Earth’s surface, correlate with the reconstructed positions of large igneous provinces and kimberlites since Pangea formed about 320 million years ago. Using this surface-to-core–mantle boundary correlation to locate continents in longitude and a novel iterative approach for defining a paleomagnetic reference frame corrected for true polar wander, we have developed a model for absolute plate motion back to earliest Paleozoic time (540 Ma). For the Paleozoic, we have identified six phases of slow, oscillatory true polar wander during which the Earth’s axis of minimum moment of inertia was similar to that of Mesozoic times. The rates of Paleozoic true polar wander (<1°/My) are compatible with those in the Mesozoic, but absolute plate velocities are, on average, twice as high. Our reconstructions generate geologically plausible scenarios, with large igneous provinces and kimberlites sourced from the margins of the large low shear-wave velocity provinces, as in Mesozoic and Cenozoic times. This absolute kinematic model suggests that a degree-2 convection mode within the Earth’s mantle may have operated throughout the entire Phanerozoic. PMID:24889632

  15. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Compared with existing variational approaches, although the PFC-DA method does not guarantee the optimal solution, it requires only one additional Poisson equation for the scalar potential field, a substantial improvement at only a small additional computational cost per iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
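
    The extra per-iteration step, a Poisson equation for the scalar potential with the velocity residual cast as a source term, can be sketched with a simple Jacobi relaxation. This is a toy stand-in: the grid, zero Dirichlet boundary and the construction of the source from the residual are all assumptions, not the paper's discretization.

```python
import numpy as np

def solve_potential(source, h=1.0, n_iter=5000):
    """Jacobi iteration for the scalar-potential Poisson equation lap(phi) = s.

    source : (ny, nx) source term built from the velocity residual (the
    PFC-DA "signal"); phi = 0 is imposed on the boundary.
    """
    phi = np.zeros_like(source)
    for _ in range(n_iter):
        # Jacobi update of interior points from the 5-point Laplacian stencil.
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
            - h * h * source[1:-1, 1:-1]
        )
    return phi
```

    In the PFC-DA loop, the resulting potential would feed back into the inlet/outlet pressures before the next flow solve.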

  16. GEM detector development for tokamak plasma radiation diagnostics: SXR poloidal tomography

    NASA Astrophysics Data System (ADS)

    Chernyshova, Maryna; Malinowski, Karol; Ziółkowski, Adam; Kowalska-Strzeciwilk, Ewa; Czarski, Tomasz; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Kolasiński, Piotr; Krawczyk, Rafał D.

    2015-09-01

    Increased attention to tungsten stems from the fact that it has become the main candidate plasma-facing material for ITER and future fusion reactors. The proposed work concerns studies of the influence of W on plasma performance through the development of new detectors based on Gas Electron Multiplier (GEM) technology for tomographic studies of tungsten transport in ITER-oriented tokamaks, e.g. the WEST project. It presents the current stage of design and development of a cylindrically bent SXR GEM detector for horizontal-port implementation. A concept to overcome the constraints imposed by the vertical port is also presented. It is expected that the detecting unit under development, when implemented, will contribute to the safe operation of tokamaks, bringing the creation of sustainable nuclear fusion reactors a step closer.

  17. A statistically valid method for using FIA plots to guide spectral class rejection in producing stratification maps

    Treesearch

    Michael L. Hoppus; Andrew J. Lister

    2002-01-01

    A Landsat TM classification method (iterative guided spectral class rejection) produced a forest cover map of southern West Virginia that provided the stratification layer for producing estimates of timberland area from Forest Service FIA ground plots using a stratified sampling technique. These same high quality and expensive FIA ground plots provided ground reference...

  18. The Stokes problem for the ellipsoid using ellipsoidal kernels

    NASA Technical Reports Server (NTRS)

    Zhu, Z.

    1981-01-01

    A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem, using an ellipsoidal kernel that represents an iterative form of Stokes' integral, is suggested, with a relative error of the order of the flattening. Rapp's method is studied in detail and procedures for improving its convergence are discussed.

  19. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections against those of the unknown structure of the real object, and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes: acquiring the four projections from the real data and from the fictitious object, generating an arbitrary distribution to define the fictitious object, optimizing the four projections, generating a new distribution for the fictitious object, and enhancing the reconstructed image. The sub-process that acquires the four projections from the input real data simply derives them from the transmitted-intensity data. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.
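
    The iterate-and-compare loop over projections can be sketched for a small 2-D object using four standard directions (0°, 90°, 45°, 135°). This is an illustrative SIRT/POCS-style scheme, not the patented procedure: spreading each line's projection mismatch evenly over its pixels is an assumed update rule, and all names are illustrative.

```python
import numpy as np

def line_ids(n):
    """Index maps assigning each pixel to its projection line in the
    0-degree, 90-degree, 45-degree and 135-degree directions."""
    i, j = np.indices((n, n))
    return [i, j, i + j, i - j + (n - 1)]

def project(img, ids):
    """Sum pixel values along each line of one projection direction."""
    return np.bincount(ids.ravel(), weights=img.ravel())

def reconstruct(targets, n, n_iter=200):
    """Iteratively adjust an n x n 'fictitious object' so its four
    projections match the target projections of the real object: each pass
    spreads the per-line mismatch evenly over that line's pixels."""
    ids_all = line_ids(n)
    counts = [np.bincount(ids.ravel()) for ids in ids_all]
    img = np.zeros((n, n))
    for _ in range(n_iter):
        for ids, target, cnt in zip(ids_all, targets, counts):
            diff = (target - project(img, ids)) / cnt
            img += diff[ids]
    return img
```

    With consistent (noiseless) projections this cyclic projection scheme converges to an object reproducing all four projections, though not necessarily the unique true object.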

  20. Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Ho; Kim, Minsung

    2017-12-01

    This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence is well suited to photovoltaic power applications. However, it exhibits non-minimum-phase behaviour, because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from time-varying grid voltage disturbance. Thus, conventional control schemes result in inaccurate output tracking. To overcome these problems, the ILC is first developed and then applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take the nonlinear averaged model into account and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noise, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype demonstrate its practical feasibility.
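
    The trial-to-trial learning update at the heart of ILC can be sketched on a toy plant. The first-order plant below is an assumption standing in for the flyback CCM dynamics (not the paper's averaged model); with a unit learning gain and a one-step shift (since u[t] first acts on y[t+1]) the tracking error is driven to zero over repeated trials.

```python
import numpy as np

def plant(u):
    """Toy first-order discrete plant y[t+1] = 0.9*y[t] + u[t] (an assumed
    stand-in for the inverter dynamics)."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = 0.9 * y[t] + u[t]
    return y

def ilc_trial(u, y_ref):
    """One ILC trial: run the plant, measure the error, then update the
    input as u_{k+1}[t] = u_k[t] + e_k[t+1] (learning gain 1, shifted one
    step to match the plant's input-output delay)."""
    e = y_ref - plant(u)
    u_new = u.copy()
    u_new[:-1] += e[1:]
    return u_new, e
```

    Because the lifted error map is strictly lower triangular here, the error is eliminated after a finite number of trials; the paper instead proves convergence for its predictive/current learning law on the nonlinear averaged model.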

  1. Evidence of dose saving in routine CT practice using iterative reconstruction derived from a national diagnostic reference level survey.

    PubMed

    Thomas, P; Hayton, A; Beveridge, T; Marks, P; Wallace, A

    2015-09-01

    To assess the influence and significance of the use of iterative reconstruction (IR) algorithms on patient dose in CT in Australia. We examined survey data submitted to the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) National Diagnostic Reference Level Service (NDRLS) during 2013 and 2014. We compared median survey dose metrics with categorization by scan region and use of IR. The use of IR results in a reduction in volume CT dose index of between 17% and 44% and a reduction in dose-length product of between 14% and 34% depending on the specific scan region. The reduction was highly significant (p < 0.001, Wilcoxon rank-sum test) for all six scan regions included in the NDRLS. Overall, 69% (806/1167) of surveys included in the analysis used IR. The use of IR in CT is achieving dose savings of 20-30% in routine practice in Australia. IR appears to be widely used by participants in the ARPANSA NDRLS with approximately 70% of surveys submitted employing this technique. This study examines the impact of the use of IR on patient dose in CT on a national scale.

  2. Advances in Global Adjoint Tomography - Data Assimilation and Inversion Strategy

    NASA Astrophysics Data System (ADS)

    Ruan, Y.; Lei, W.; Lefebvre, M. P.; Modrak, R. T.; Smith, J. A.; Bozdag, E.; Tromp, J.

    2016-12-01

    Seismic tomography provides the most direct way to understand Earth's interior by imaging elastic heterogeneity, anisotropy and anelasticity. Resolving the fine structure of these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models. On the supercomputer "Titan" at Oak Ridge National Laboratory, we are employing a spectral-element method (Komatitsch & Tromp 1999, 2002) in combination with an adjoint method (Tromp et al., 2005) to accurately calculate theoretical seismograms and Frechet derivatives. Using 253 carefully selected events, Bozdag et al. (2016) iteratively determined a transversely isotropic earth model (GLAD_M15) using 15 preconditioned conjugate-gradient iterations. To obtain higher resolution images of the mantle, we have expanded our database to more than 4,220 Mw 5.0-7.0 events that occurred between 1995 and 2014. Instead of using the entire database all at once, we draw subsets of about 1,000 events from our database for each iteration to achieve a faster convergence rate with limited computing resources. To provide good coverage of deep structures, we selected approximately 700 deep and intermediate-depth earthquakes and 300 shallow events to start a new iteration. We reinverted the CMT solutions of these events in the latest model and recalculated synthetic seismograms. Using the synthetics as reference seismograms, we selected time windows that show good agreement with the data and made measurements within those windows. From the measurements we further assess the overall quality of each event and station, and exclude bad measurements based upon certain criteria. So far, with very conservative criteria, we have assimilated more than 8.0 million windows from 1,000 earthquakes in three period bands for the new iteration. For subsequent iterations, we will change the period bands and window-selection criteria to include more windows.
In the inversion, dense array data (e.g., USArray) usually dominate the model updates. To better handle this issue, we introduced a weighting of stations and events based upon their relative distances and showed that the contribution from dense arrays is better balanced in the Frechet derivatives. We will present a summary of this form of data assimilation and preliminary results of the first few iterations.
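
    One simple way to realize such distance-based balancing (a generic sketch, not necessarily the authors' exact scheme) is to weight each station inversely by the number of neighbours within some radius, so dense arrays do not dominate:

```python
import numpy as np

def geographic_weights(positions, radius):
    """Down-weight stations in dense arrays: weight ~ 1 / (number of
    neighbours within `radius`, counting the station itself)."""
    pos = np.asarray(positions, float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    counts = (d < radius).sum(axis=1)
    w = 1.0 / counts
    return w / w.sum()            # normalize so the weights sum to one

# three clustered stations plus one isolated station
stations = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
w = geographic_weights(stations, radius=1.0)
```

    The isolated station ends up with three times the weight of any clustered one, balancing the two regions' total contributions.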

  3. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
Within the six-case registration accuracy study, the iterative intensity-matching Demons approach reduced the mean target registration error (TRE) to (2.5 +/- 2.8) mm, compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
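
    The key idea, re-estimating the CT-CBCT intensity relationship from the currently overlapping voxels at every iteration of the deformable registration, can be sketched in 1-D. This uses a single global linear intensity model and toy signals, not the published tissue-specific implementation:

```python
import numpy as np

def iterative_intensity_demons_1d(fixed, moving, n_iter=200, sigma=2.0):
    """1-D sketch of Demons registration with per-iteration intensity matching."""
    x = np.arange(fixed.size, dtype=float)
    s = np.zeros_like(fixed)                       # displacement field
    kernel = np.exp(-0.5 * (np.arange(-4, 5) / sigma)**2)
    kernel /= kernel.sum()
    for _ in range(n_iter):
        warped = np.interp(x + s, x, moving)
        # (1) re-estimate the intensity relationship on the current overlap
        a, b = np.polyfit(warped, fixed, 1)
        corrected = a * warped + b
        # (2) classic Demons force using the corrected intensities
        diff = corrected - fixed
        grad = np.gradient(corrected)
        s -= diff * grad / (grad**2 + diff**2 + 1e-12)
        # (3) Gaussian-like regularization of the displacement field
        s = np.convolve(s, kernel, mode="same")
    warped = np.interp(x + s, x, moving)
    a, b = np.polyfit(warped, fixed, 1)
    return s, a * warped + b

x = np.arange(128, dtype=float)
fixed = np.exp(-0.5 * ((x - 60) / 6)**2)                 # "CT"
moving = 2.0 * np.exp(-0.5 * ((x - 66) / 6)**2) + 0.3    # shifted, rescaled "CBCT"
s, matched = iterative_intensity_demons_1d(fixed, moving)
```

    Because the intensity map is refitted on each pass, the force term always compares like-with-like, which is what makes the scheme robust to large CT-CBCT intensity mismatch.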

  4. A contrast source method for nonlinear acoustic wave fields in media with spatially inhomogeneous attenuation.

    PubMed

    Demi, L; van Dongen, K W A; Verweij, M D

    2011-03-01

    Experimental data reveals that attenuation is an important phenomenon in medical ultrasound. Attenuation is particularly important for medical applications based on nonlinear acoustics, since higher harmonics experience higher attenuation than the fundamental. Here, a method is presented to accurately solve the wave equation for nonlinear acoustic media with spatially inhomogeneous attenuation. Losses are modeled by a spatially dependent compliance relaxation function, which is included in the Westervelt equation. Introduction of absorption in the form of a causal relaxation function automatically results in the appearance of dispersion. The appearance of inhomogeneities implies the presence of a spatially inhomogeneous contrast source in the presented full-wave method leading to inclusion of forward and backward scattering. The contrast source problem is solved iteratively using a Neumann scheme, similar to the iterative nonlinear contrast source (INCS) method. The presented method is directionally independent and capable of dealing with weakly to moderately nonlinear, large scale, three-dimensional wave fields occurring in diagnostic ultrasound. Convergence of the method has been investigated and results for homogeneous, lossy, linear media show full agreement with the exact results. Moreover, the performance of the method is demonstrated through simulations involving steered and unsteered beams in nonlinear media with spatially homogeneous and inhomogeneous attenuation. © 2011 Acoustical Society of America
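
    The Neumann iteration at the heart of such contrast-source methods (each pass adds one more order of scattering) can be sketched on a toy discretized 1-D problem. The kernel, contrast, and incident field below are illustrative, not the paper's Westervelt-equation operators:

```python
import numpy as np

# toy discretized scattering problem: u = u_inc + G @ (chi * u)
n = 40
x = np.linspace(0.0, 1.0, n)
k = 8.0                                                       # wavenumber
G = 0.05j * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))  # toy 1-D Green's kernel
chi = np.where((x > 0.4) & (x < 0.6), 0.8, 0.0)               # inhomogeneous contrast
u_inc = np.exp(1j * k * x)                                    # incident field

# Neumann (successive-scattering) iteration
u = u_inc.copy()
for _ in range(200):
    u = u_inc + G @ (chi * u)

# reference: direct solve of (I - G diag(chi)) u = u_inc
u_direct = np.linalg.solve(np.eye(n) - G * chi[None, :], u_inc)
```

    The iteration converges because the contrast operator here is a contraction; the fixed point agrees with the direct solve, and the inhomogeneous chi naturally injects both forward- and backward-scattered contributions.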

  5. Simulation of cesium injection and distribution in rf-driven ion sources for negative hydrogen ion generation.

    PubMed

    Gutser, R; Fantz, U; Wünderlich, D

    2010-02-01

    Cesium seeded sources for surface generated negative hydrogen ions are major components of neutral beam injection systems in future large-scale fusion experiments such as ITER. Stability and delivered current density depend highly on the cesium conditions during plasma-on and plasma-off phases of the ion source. The Monte Carlo code CSFLOW3D was used to study the transport of neutral and ionic cesium in both phases. Homogeneous and intense flows were obtained from two cesium sources in the expansion region of the ion source and from a dispenser array, which is located 10 cm in front of the converter surface.

  6. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large area (Asource,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in the construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is exploited. A direct, cross-checked analysis of simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (Asource,BATMAN ≈ 0.32 × 0.59 m2) has been conducted for a low-perveance beam, but with a full set of plasma parameters available. ONIX has been partially benchmarked by comparison with results obtained using the commercial particle-tracing code for positive ion extraction, KOBRA3D. Very good agreement has been found in terms of meniscus position and shape for simulations at different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only the extraction potential but also the initial plasma density and its electronegativity play a crucial role in the meniscus formation. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, provoking direct NI extraction. Most of the surface-produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma, the extracted NI current from this region is low. This observation indicates the high relevance of direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction of surface-produced ions must be present in order to obtain a sufficiently high extracted NI current density. 
The calculated extracted currents, both ions and electrons, agree rather well with the experiment.
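
    For reference, the Child-Langmuir law invoked above gives the space-charge-limited current density across a planar extraction gap, j = (4*eps0/9) * sqrt(2q/m) * V^(3/2) / d^2. The voltage and gap below are purely illustrative numbers, not BATMAN parameters:

```python
from math import sqrt

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
QE   = 1.602176634e-19       # elementary charge, C
M_H  = 1.6735575e-27         # hydrogen-ion mass, kg (electron mass neglected)

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) across a planar gap."""
    return (4.0 * EPS0 / 9.0) * sqrt(2.0 * QE / M_H) * voltage**1.5 / gap**2

# illustrative numbers only: a ~10 kV extraction potential over a ~5 mm gap
j = child_langmuir_j(10e3, 5e-3)   # roughly 2.2e3 A/m^2, i.e. ~0.22 A/cm^2
```

    The V^(3/2)/d^2 scaling is why the meniscus shape depends jointly on the extraction potential and on how much charge the plasma can supply.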

  7. A coupled cluster theory with iterative inclusion of triple excitations and associated equation of motion formulation for excitation energy and ionization potential

    NASA Astrophysics Data System (ADS)

    Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito

    2017-08-01

    A single reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing through two different sets of exponential operators: while one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on excited determinants, the other is the regular single and double excitation operator in the sense of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner, without a significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations and have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and have performed equation-of-motion calculations with coupled cluster singles, doubles, and triples to obtain the ionization potentials and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.

  8. Integrated modelling of steady-state scenarios and heating and current drive mixes for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, Masanori; Park, Jin Myung; Giruzzi, G.

    2011-01-01

    Recent progress on ITER steady-state (SS) scenario modelling by the ITPA-IOS group is reviewed. Code-to-code benchmarks as the IOS group's common activities for the two SS scenarios (weak shear scenario and internal transport barrier scenario) are discussed in terms of transport, kinetic profiles, and heating and current drive (CD) sources using various transport codes. Weak magnetic shear scenarios integrate the plasma core and edge by combining a theory-based transport model (GLF23) with scaled experimental boundary profiles. The edge profiles (at normalized radius rho = 0.8-1.0) are adopted from an edge-localized mode-averaged analysis of a DIII-D ITER demonstration discharge. A fully noninductive SS scenario is achieved with fusion gain Q = 4.3, noninductive fraction f(NI) = 100%, bootstrap current fraction f(BS) = 63% and normalized beta beta(N) = 2.7 at plasma current I(p) = 8 MA and toroidal field B(T) = 5.3 T using ITER day-1 heating and CD capability. Substantial uncertainties come from outside the radius at which the boundary conditions are set (rho = 0.8). The present simulation assumed that beta(N)(rho) at the top of the pedestal (rho = 0.91) is about 25% above the peeling-ballooning threshold. ITER will be challenged to achieve this boundary, considering different operating conditions (T(e)/T(i) approximately 1 and density peaking). Overall, the experimentally scaled edge is on the optimistic side of the prediction. A number of SS scenarios with different heating and CD mixes in a wide range of conditions were explored by exploiting the weak-shear steady-state solution procedure with the GLF23 transport model and the scaled experimental edge. The results are also presented in the operation space for DT neutron power versus stationary burn pulse duration with assumed poloidal flux availability at the beginning of stationary burn, indicating that the long-pulse operation goal (3000 s) at I(p) = 9 MA is possible. 
Source calculations in these simulations have been revised for electron cyclotron current drive, including parallel momentum conservation effects, and for neutral beam current drive, with finite orbit and magnetic pitch effects.

  9. Automating Microbial Directed Evolution For Bioengineering Applications

    NASA Astrophysics Data System (ADS)

    Lee, A.; Demachkie, I. S.; Sardesh, N.; Arismendi, D.; Ouandji, C.; Wang, J.; Blaich, J.; Gentry, D.

    2016-12-01

    From a microbiology perspective, directed evolution is a technique that uses controlled environmental pressures to select for a desired phenotype. Directed evolution has the distinct advantage over rational design of not needing extensive knowledge of the genome or pathways associated with a microorganism to induce phenotypes. However, there are currently limitations to the applicability of this technique: it is time-consuming, error-prone, and dependent on existing assays that may lack selectivity for the given phenotype. The AADEC (Autonomous Adaptive Directed Evolution Chamber) system is a proof-of-concept instrument to automate and improve the technique such that directed evolution can be used more effectively as a general bioengineering tool. A series of tests using the automated system and comparable by-hand survival assay measurements have been carried out using UV-C radiation and Escherichia coli cultures in order to demonstrate the advantages of the AADEC over traditional implementations of directed evolution such as random mutagenesis. AADEC uses UV-C exposure as both a source of environmental stress and of mutagenesis, so in order to evaluate the UV-C tolerance of the cultures, a manual UV-C exposure survival assay was developed alongside the device to compare the survival fractions at a fixed dosage. This survival assay involves exposing E. coli to UV-C radiation using a custom-designed exposure hood to control the flux and dose. Surviving cells are counted and then transferred to the next iteration, and so on for several iterations, to calculate the survival fraction for each exposure iteration. This survival assay primarily serves as a baseline for the AADEC device, allowing quantification of the differences between the AADEC system and the manual approach. 
The primary comparison metric is the survival fraction; it is obtained from optical density and plate counts in the manual assay, and from optical density growth-curve fits pre- and post-exposure in the automated case. These data can then be compiled to calculate trends over the iterations, characterizing the increasing UV-C resistance of the E. coli strains. The observed trends from the two sources are statistically indistinguishable through several iterations.

  10. Data Integration Tool: From Permafrost Data Translation Research Tool to A Robust Research Application

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.

    2016-12-01

    The United States National Science Foundation funded PermaData project led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P) aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps, or operations, called widgets. Each widget performs a specific operation, such as reading, multiplying by a constant, sorting, plotting, or writing data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative data-manipulation and quality-control process: visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led first to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open-source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open-source tool on GitHub, packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. 
DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, to nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive-site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
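
    The widget model reduces to a very small core: a list of single-operation callables applied in the user-chosen order. A minimal sketch with hypothetical widget names, not DIT's actual API:

```python
# each widget is one operation; the chosen ordering forms the workflow
def read_data(_):
    return [3.0, 1.0, None, 2.0]          # stand-in for reading a raw file

def drop_missing(values):
    return [v for v in values if v is not None]

def scale(factor):
    def widget(values):                   # widget configured with a parameter
        return [v * factor for v in values]
    return widget

def run_workflow(widgets):
    data = None
    for widget in widgets:                # apply each operation in order
        data = widget(data)
    return data

result = run_workflow([read_data, drop_missing, scale(10.0), sorted])
```

    Reordering or swapping entries in the list changes the pipeline without touching any widget, which is the property that let the tool grow from a personal script into a general application.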

  11. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
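
    Source iteration itself is a simple fixed-point scheme: lag the scattering source, solve, repeat. A zero-dimensional (infinite-medium, one-group) sketch with illustrative cross sections shows the idea; its error contracts by the scattering ratio sigma_s/sigma_t per pass, which is precisely what diffusion-synthetic acceleration is designed to speed up when scattering dominates:

```python
# toy one-group, infinite-medium analogue of source iteration:
# each pass "transports" the lagged scattering source, phi <- (sig_s*phi + q)/sig_t
sig_t, sig_s, q = 1.0, 0.8, 1.0        # total and scattering cross sections; fixed source
phi, history = 0.0, []
for _ in range(100):
    phi = (sig_s * phi + q) / sig_t    # one source iteration (spectral radius sig_s/sig_t)
    history.append(phi)

exact = q / (sig_t - sig_s)            # analytic infinite-medium solution (= 5.0 here)
```

    With sig_s/sig_t close to one the plain iteration becomes arbitrarily slow, motivating the acceleration schemes used by all three codes.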

  12. Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows

    NASA Astrophysics Data System (ADS)

    Gizon, Laurent; Barucq, Hélène; Duruflé, Marc; Hanson, Chris S.; Leguèbe, Michael; Birch, Aaron C.; Chabassier, Juliette; Fournier, Damien; Hohage, Thorsten; Papini, Emanuele

    2017-04-01

    Context. Local helioseismology has so far relied on semi-analytical methods to compute the spatial sensitivity of wave travel times to perturbations in the solar interior. These methods are cumbersome and lack flexibility. Aims: Here we propose a convenient framework for numerically solving the forward problem of time-distance helioseismology in the frequency domain. The fundamental quantity to be computed is the cross-covariance of the seismic wavefield. Methods: We choose sources of wave excitation that enable us to relate the cross-covariance of the oscillations to the Green's function in a straightforward manner. We illustrate the method by considering the 3D acoustic wave equation in an axisymmetric reference solar model, ignoring the effects of gravity on the waves. The symmetry of the background model around the rotation axis implies that the Green's function can be written as a sum of longitudinal Fourier modes, leading to a set of independent 2D problems. We use a high-order finite-element method to solve the 2D wave equation in frequency space. The computation is embarrassingly parallel, with each frequency and each azimuthal order solved independently on a computer cluster. Results: We compute travel-time sensitivity kernels in spherical geometry for flows, sound speed, and density perturbations under the first Born approximation. Convergence tests show that travel times can be computed with a numerical precision better than one millisecond, as required by the most precise travel-time measurements. Conclusions: The method presented here is computationally efficient and will be used to interpret travel-time measurements in order to infer, e.g., the large-scale meridional flow in the solar convection zone. It allows the implementation of (full-waveform) iterative inversions, whereby the axisymmetric background model is updated at each iteration.
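
    The frequency-domain approach reduces to one independent linear solve per frequency (and, in the axisymmetric case, per azimuthal order), which is what makes it embarrassingly parallel. A 1-D finite-difference sketch with illustrative parameters, not the solar model:

```python
import numpy as np

def solve_helmholtz_1d(omega, c=1.0, n=200):
    """One frequency-domain solve: (d2/dx2 + (omega/c)^2) u = -delta(x - 0.5)
    on (0, 1) with homogeneous Dirichlet ends, second-order finite differences."""
    h = 1.0 / (n + 1)
    diag = np.full(n, -2.0 / h**2 + (omega / c)**2, dtype=complex)
    off = np.ones(n - 1) / h**2
    A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    rhs = np.zeros(n, dtype=complex)
    rhs[n // 2] = -1.0 / h               # discrete point source at the midpoint
    return np.linalg.solve(A, rhs)

# each frequency is an independent linear solve, so this loop parallelizes trivially
freqs = [3.0, 7.0, 11.0]
fields = {w: solve_helmholtz_1d(w) for w in freqs}
```

    In the real application each such solve is a 2-D high-order finite-element problem per frequency and azimuthal order, farmed out across a cluster exactly as this loop suggests.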

  13. Algorithm for Wavefront Sensing Using an Extended Scene

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph; Ohara, Catherine

    2008-01-01

    A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image could be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted si(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated a reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and si(x,y). 4. Multiply one transform by the complex conjugate of the other to obtain a cross-correlation function Ci(u,v) in the Fourier domain. Then let the phase of Ci(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform, Si(u,v), of the full NxN cell si(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, accumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
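
    Steps 3-5, estimating a sub-image shift from the slope of the cross-correlation phase, can be sketched in 1-D (illustrative signals; the actual algorithm works on 2-D cells):

```python
import numpy as np

def phase_slope_shift(ref, cell):
    """Estimate the shift between two cells from the phase of their
    cross-spectrum (1-D analogue of steps 3-5 of the algorithm)."""
    n = ref.size
    C = np.fft.fft(cell) * np.conj(np.fft.fft(ref))   # cross-correlation in Fourier domain
    freqs = np.fft.fftfreq(n)
    idx = np.arange(1, 5)                             # small low-frequency subdomain (skip DC)
    phase = np.angle(C[idx])
    slope = np.polyfit(freqs[idx], phase, 1)[0]       # fit a slope to the phase
    return -slope / (2.0 * np.pi)                     # linear phase ramp -> spatial shift

x = np.arange(64)
ref = np.exp(-0.5 * ((x - 32) / 5.0)**2)              # reference cell r
cell = np.roll(ref, 3)                                # cell s_i, shifted by +3 samples
shift = phase_slope_shift(ref, cell)                  # recovers 3.0
```

    Restricting the fit to low frequencies keeps the phase unwrapped and noise-tolerant, which is why the algorithm iterates, accumulating small slope corrections rather than trusting a single fit.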

  14. Self-induced steady-state magnetic field in the negative ion sources with localized rf power deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shivarova, A.; Todorov, D., E-mail: dimitar-tdrv@phys.uni-sofia.bg; Lishev, St.

    2016-02-15

    This study falls within the scope of recent activity on the modeling of SPIDER (Source for Production of Ions of Deuterium Extracted from RF plasma), which is under development for the neutral beam injection heating system of ITER. The previously established regime of non-ambipolarity in the source is extended here by introducing into the model the steady-state magnetic field self-induced in the discharge by the dc current flowing through it. Strong changes in the discharge structure are reported.

  15. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. To make the algorithm self-adaptive, the local mean square error (MSE) is defined as the criterion for assessing the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.
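
    The pulse-matching step can be sketched as a brute-force local-MSE search of the reference (source) pulse along the echo; the toy pulse and noise levels below are illustrative, not the paper's full PM-ICA pipeline:

```python
import numpy as np

def match_pulse(signal, pulse):
    """Slide the reference pulse over the echo and return the offset
    minimizing the local mean-square error (the matching criterion)."""
    n, m = signal.size, pulse.size
    mse = np.array([np.mean((signal[i:i + m] - pulse)**2)
                    for i in range(n - m + 1)])
    return int(np.argmin(mse)), mse

rng = np.random.default_rng(1)
pulse = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0)**2)   # stand-in laser pulse
echo = 0.05 * rng.standard_normal(200)                   # noisy return
echo[120:141] += pulse                                   # echo arrives near sample 120
offset, mse = match_pulse(echo, pulse)
```

    Because the criterion is local, it stays meaningful even when the rest of the trace is dominated by speckle, which is what lets the iteration lock onto the true echo position.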

  16. On the effect of memory in a quantum prisoner's dilemma cellular automaton

    NASA Astrophysics Data System (ADS)

    Alonso-Sanz, Ramón; Revuelta, Fabio

    2018-03-01

    The disrupting effect of quantum memory on the dynamics of a spatial quantum formulation of the iterated prisoner's dilemma game with variable entangling is studied. The game is played within a cellular automata framework, i.e., with local and synchronous interactions. The main findings of this work refer to the shrinking effect of memory on the disruption induced by noise.

  17. Robust High Data Rate MIMO Underwater Acoustic Communications

    DTIC Science & Technology

    2011-09-30

    We solved it via exploiting FFTs. The extended CAN algorithm is referred to as periodic CAN ( PeCAN ). Unlike most existing sequence construction...methods which are algebraic and deterministic in nature, we start the iteration of PeCAN from random phase initializations and then proceed to...covert UAC applications. We will use PeCAN sequences for more in-water experimentations to demonstrate their effectiveness. Temporal Resampling: In

  18. Alternatives for Developing User Documentation for Applications Software

    DTIC Science & Technology

    1991-09-01

    style that is designed to match adult reading behaviors, using reader-based writing techniques, developing effective graphics , creating reference aids...involves research, analysis, design , and testing. The writer must have a solid understanding of the technical aspects of the document being prepared, good...ABSTRACT The preparation of software documentation is an iterative process that involves research, analysis, design , and testing. The writer must have

  19. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperatures below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  20. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
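
    The flux-conserving L-R iteration described above can be sketched in one dimension (a toy three-point blur kernel and synthetic point sources, not BLAST data):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Iterative flux-conserving Richardson-Lucy deconvolution (1D toy)."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")          # predicted data
        ratio = observed / np.maximum(conv, 1e-12)         # observed/predicted
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# blur two point sources with a small kernel, then deconvolve
x = np.zeros(64)
x[20], x[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])   # normalized blur kernel
y = np.convolve(x, psf, mode="same")
restored = richardson_lucy(y, psf)
```

    Each iteration multiplies the estimate by a PSF-correlated ratio of observed to predicted data, so total flux is preserved while point sources sharpen.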

  1. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Fréchet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. 
Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
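
    The Jacobian-free mechanics described above can be sketched on a toy 2x2 nonlinear system (not a Modflow model): the Fréchet derivative action J·v is approximated by a finite-difference perturbation of the residual, so the Krylov solver never needs an assembled Jacobian.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy coupled nonlinear system F(u) = 0 with a root at u = (1, 2)."""
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jfnk(F, u0, tol=1e-10, max_newton=50):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # perturbation size for the finite-difference Frechet derivative
        eps = np.sqrt(np.finfo(float).eps) * max(1.0, np.linalg.norm(u))
        # Jacobian-free matvec: J v ~ (F(u + eps*v) - F(u)) / eps
        jv = lambda v: (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)       # inexact Newton step via Krylov iteration
        u = u + du
    return u

sol = jfnk(residual, [1.0, 1.0])
```

    In a real coupled-model setting, `residual` would simply call the existing simulators and stack their residuals, which is exactly why JFNK needs so few code changes.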

  2. Iterative image reconstruction in elastic inhomogenous media with application to transcranial photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.

    2017-03-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
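
    The reconstruction step described above reduces to penalized least-squares estimation. A minimal sketch, with a random matrix standing in for the discretized elastic-wave forward operator and plain gradient descent in place of the authors' solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_src = 120, 40
H = rng.standard_normal((n_meas, n_src))       # stand-in forward operator
f_true = np.zeros(n_src)
f_true[10:15] = 1.0                            # toy initial pressure distribution
p = H @ f_true + 0.01 * rng.standard_normal(n_meas)  # noisy "measurements"

lam = 0.1                                      # quadratic penalty weight
L = np.linalg.norm(H, 2) ** 2 + lam            # gradient Lipschitz constant
f = np.zeros(n_src)
for _ in range(500):
    grad = H.T @ (H @ f - p) + lam * f         # grad of 0.5||Hf-p||^2 + 0.5*lam*||f||^2
    f -= grad / L                              # gradient descent step

rel_err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
```

    In the paper's setting the matrix-vector products H f and H^T r would be computed by running the FDTD elastic-wave model and its adjoint, not by explicit matrices.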

  3. Mashups over the Deep Web

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; Simon, Kai; Lausen, Georg

    Combining information from different Web sources often results in a tedious and repetitive process; even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to these chained queries is data-centric mashups, which allow the data flow to be modelled visually as a graph, where the nodes represent the data sources and the edges the data flow.
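
    The chained-query pattern can be sketched with two hypothetical Web sources (the function names and data are illustrative only):

```python
# hypothetical Web sources (names and data are illustrative only)
def search_authors(topic):
    """First source: topic -> list of author names."""
    return {"deconvolution": ["Roy", "Ade"]}.get(topic, [])

def search_papers(author):
    """Second source: author name -> list of paper titles."""
    data = {"Roy": ["BLAST deconvolution"], "Ade": ["Submm instrumentation"]}
    return data.get(author, [])

def mashup(topic):
    """Chained query: iterate over the first result list and feed each
    single result into the subsequent query, as in a data-centric mashup."""
    results = []
    for author in search_authors(topic):
        for paper in search_papers(author):
            results.append((author, paper))
    return results
```

    A data-centric mashup tool would express the same pipeline as a two-node graph with an edge carrying each author name from the first source to the second.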

  4. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  5. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source including the expansion, extraction, and part of the acceleration regions, keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection fields have been included, and a negative ion current density of j(H-) = 660 A/m² from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included and a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates a significant inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.

  6. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
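
    The expansion of an exponent matrix into a binary QC-LDPC parity-check matrix, which underlies the girth analysis above, can be sketched as follows (the small exponent matrix is illustrative, not one of the paper's designs):

```python
import numpy as np

def expand_qc_ldpc(exponents, Z):
    """Expand an exponent matrix into a binary QC-LDPC parity-check matrix.
    An entry e >= 0 becomes the Z x Z identity cyclically shifted by e;
    an entry of -1 becomes the Z x Z all-zero block."""
    m, n = exponents.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            e = exponents[i, j]
            if e >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, e, axis=1)
    return H

# illustrative 2 x 3 exponent matrix with lifting size Z = 3
E = np.array([[0, 1, -1],
              [2, -1, 0]])
H = expand_qc_ldpc(E, Z=3)
```

    Girth-4 cycle conditions can then be checked directly on the exponent entries, which is what makes the joint source/relay design tractable.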

  7. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be further processed. By potentially testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, for example from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. 
If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
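
    The shift-estimation step at the core of this procedure can be illustrated with a plain FFT cross-correlation restricted to integer-pixel shifts (a simplified stand-in for the adaptive cross-correlation algorithm):

```python
import numpy as np

def estimate_shift(ref, test):
    """Integer-pixel shift of `test` relative to `ref` via FFT
    cross-correlation (a simplified stand-in for the ACC algorithm)."""
    F = np.fft.fft2(test) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above the Nyquist point to negative shifts
    return tuple(int(k) if k <= s // 2 else int(k) - s
                 for k, s in zip(idx, corr.shape))

rng = np.random.default_rng(0)
scene = rng.random((32, 32))                    # synthetic extended scene
shifted = np.roll(scene, (3, -2), axis=(0, 1))  # known test shift
```

    The self-test described in the abstract amounts to applying such a known shift to a cell from the same frame and checking that the estimator recovers it within a threshold.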

  8. Conceptual design of the DEMO neutral beam injectors: main developments and R&D achievements

    NASA Astrophysics Data System (ADS)

    Sonato, P.; Agostinetti, P.; Bolzonella, T.; Cismondi, F.; Fantz, U.; Fassina, A.; Franke, T.; Furno, I.; Hopf, C.; Jenkins, I.; Sartori, E.; Tran, M. Q.; Varje, J.; Vincenzi, P.; Zanotto, L.

    2017-05-01

    The objectives of the nuclear fusion power plant DEMO, to be built after the ITER experimental reactor, are usually understood to lie somewhere between those of ITER and a ‘first of a kind’ commercial plant. Hence, in DEMO the issues related to efficiency and RAMI (reliability, availability, maintainability and inspectability) are among the most important drivers for the design, as the cost of the electricity produced by this power plant will strongly depend on these aspects. In the framework of the EUROfusion Work Package Heating and Current Drive within the Power Plant Physics and Development activities, a conceptual design of the neutral beam injector (NBI) for the DEMO fusion reactor has been developed by Consorzio RFX in collaboration with other European research institutes. In order to improve efficiency and RAMI aspects, several innovative solutions have been introduced in comparison to the ITER NBI, mainly regarding the beam source, neutralizer and vacuum pumping systems.

  9. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation, in which the solution of a general eikonal equation is decomposed into the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
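
    The causality-enforcing Gauss-Seidel sweeps can be sketched for the plain (unfactored) eikonal equation |grad T| = s on a uniform grid; the factored variant applies the same alternating sweeps to the correction factor:

```python
import numpy as np

def fast_sweep_eikonal(slowness, h, source, n_sweeps=8):
    """Solve |grad T| = s by Gauss-Seidel sweeps in four alternating
    orderings (the fast sweeping method), with an upwind update."""
    n, m = slowness.shape
    T = np.full((n, m), 1e10)
    T[source] = 0.0
    orders = [(range(n), range(m)),
              (range(n - 1, -1, -1), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == source:
                        continue
                    a = min(T[i - 1, j] if i > 0 else 1e10,
                            T[i + 1, j] if i < n - 1 else 1e10)
                    b = min(T[i, j - 1] if j > 0 else 1e10,
                            T[i, j + 1] if j < m - 1 else 1e10)
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:          # one-sided (causal) update
                        t_new = min(a, b) + f
                    else:                         # two-sided quadratic update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

slowness = np.ones((8, 8))   # constant slowness -> T is a distance field
T = fast_sweep_eikonal(slowness, h=1.0, source=(0, 0))
```

    With constant slowness and a point source this sketch also shows the accuracy issue the factored formulation addresses: the upwind solution overshoots the true distance near the source singularity.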

  10. Multistep-Ahead Air Passengers Traffic Prediction with Hybrid ARIMA-SVMs Models

    PubMed Central

    Ming, Wei; Xiong, Tao

    2014-01-01

    Hybrid ARIMA-SVMs prediction models have been established recently, taking advantage of the unique strengths of ARIMA and SVMs models in linear and nonlinear modeling, respectively. Building upon such hybrid ARIMA-SVMs models, this study extends them to the case of multistep-ahead prediction of air passenger traffic using the two most commonly used multistep-ahead prediction strategies, namely the iterated strategy and the direct strategy. Additionally, the effectiveness of data preprocessing approaches, such as deseasonalization and detrending, is investigated and demonstrated for both strategies. Real data sets comprising four selected airlines' monthly series were collected to justify the effectiveness of the proposed approach. Empirical results demonstrate that the direct strategy performs better than the iterated one for long-term prediction, while the iterated one performs better for short-term prediction. Furthermore, both deseasonalization and detrending can significantly improve the prediction accuracy for both strategies, indicating the necessity of data preprocessing. As such, this study serves as a full reference for planners in the air transportation industry on how to tackle multistep-ahead prediction tasks in the implementation of either prediction strategy. PMID:24723814
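
    The two strategies can be sketched for a generic linear autoregressive predictor (a toy series stands in for the airline data, and plain least squares for the ARIMA-SVMs hybrid):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(220)
y = np.sin(0.2 * t) + 0.02 * rng.standard_normal(220)  # toy "traffic" series

p, H = 4, 6   # lag order and forecast horizon

def lag_matrix(series, p):
    """Rows are windows [y_t, ..., y_{t+p-1}]."""
    return np.column_stack([series[i:len(series) - p + 1 + i] for i in range(p)])

X = lag_matrix(y, p)

# iterated strategy: one 1-step-ahead model, predictions fed back as inputs
w1, *_ = np.linalg.lstsq(X[:-1], y[p:], rcond=None)
window = list(y[-p:])
iterated = []
for _ in range(H):
    yhat = float(np.dot(window[-p:], w1))
    iterated.append(yhat)
    window.append(yhat)           # feed the forecast back in

# direct strategy: a separate h-step-ahead model for each horizon h
direct = []
for h in range(1, H + 1):
    rows = len(y) - p - h + 1
    wh, *_ = np.linalg.lstsq(X[:rows], y[p + h - 1:], rcond=None)
    direct.append(float(np.dot(y[-p:], wh)))
```

    The iterated strategy reuses one model but accumulates feedback error at long horizons; the direct strategy avoids feedback at the cost of fitting H separate models, matching the trade-off reported in the abstract.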

  11. Assessment and selection of materials for ITER in-vessel components

    NASA Astrophysics Data System (ADS)

    Kalinin, G.; Barabash, V.; Cardella, A.; Dietz, J.; Ioki, K.; Matera, R.; Santoro, R. T.; Tivey, R.; ITER Home Teams

    2000-12-01

    During the international thermonuclear experimental reactor (ITER) engineering design activities (EDA) significant progress has been made in the selection of materials for the in-vessel components of the reactor. This progress is a result of the worldwide collaboration of material scientists and industries which focused their effort on the optimisation of material and component manufacturing and on the investigation of the most critical material properties. Austenitic stainless steels 316L(N)-IG and 316L, nickel-based alloys Inconel 718 and Inconel 625, Ti-6Al-4V alloy and two copper alloys, CuCrZr-IG and CuAl25-IG, have been proposed as reference structural materials, and ferritic steel 430, and austenitic steel 304B7 with the addition of boron have been selected for some specific parts of the ITER in-vessel components. Beryllium, tungsten and carbon fibre composites are considered as plasma facing armour materials. The data base on the properties of all these materials is critically assessed and briefly reviewed in this paper together with the justification of the material selection (e.g., effect of neutron irradiation on the mechanical properties of materials, effect of manufacturing cycle, etc.).

  12. Closed-loop control of artificial pancreatic Beta-cell in type 1 diabetes mellitus using model predictive iterative learning control.

    PubMed

    Wang, Youqing; Dassau, Eyal; Doyle, Francis J

    2010-02-01

    A novel combination of iterative learning control (ILC) and model predictive control (MPC), referred to here as model predictive iterative learning control (MPILC), is proposed for glycemic control in type 1 diabetes mellitus. MPILC exploits two key factors: frequent glucose readings made possible by continuous glucose monitoring technology, and the repetitive nature of glucose-meal-insulin dynamics with a 24-h cycle. The proposed algorithm can learn from an individual's lifestyle, allowing the control performance to be improved from day to day. After less than 10 days, the blood glucose concentrations can be kept within a range of 90-170 mg/dL. Generally, control performance under MPILC is better than that under MPC. The proposed methodology is robust to random variations in meal timings within ±60 min or meal amounts within ±75% of the nominal value, which validates MPILC's superior robustness compared to run-to-run control. Moreover, to further improve the algorithm's robustness, an automatic scheme for setpoint update that ensures safe convergence is proposed. Furthermore, the proposed method does not require user intervention; hence, the algorithm should be of particular interest for glycemic control in children and adolescents.
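
    The day-to-day learning underlying ILC can be sketched with a hypothetical scalar plant and a repeating daily disturbance (this is not the article's glucose-insulin model):

```python
import numpy as np

T = 24                                                # samples per "day"
d = 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))    # repeating daily disturbance
ref = 1.2 * np.ones(T)                                # desired output profile
plant_gain = 0.8                                      # hypothetical plant: y = 0.8*u + d

u = np.zeros(T)
learn = 0.9                                           # ILC learning gain
for day in range(15):
    y = plant_gain * u + d                            # run one "day"
    e = ref - y                                       # tracking error
    u = u + learn * e / plant_gain                    # learn from this day's error

final_error = np.max(np.abs(ref - (plant_gain * u + d)))
```

    Because the disturbance repeats every cycle, the tracking error contracts by a factor (1 - learn) each day, which is the mechanism MPILC combines with within-day MPC.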

  13. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    PubMed

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
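
    The PWLS idea (weighting the data fidelity by the per-measurement noise variance and adding a penalty term) can be sketched in 1D, with a simple smoothness penalty standing in for the PINL prior:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
truth = np.where(np.arange(n) < 50, 1.0, 3.0)   # piecewise-constant "image"
sigma = 0.2 + 0.3 * rng.random(n)               # data-dependent noise levels
y = truth + sigma * rng.standard_normal(n)

W = np.diag(1.0 / sigma ** 2)       # weights: inverse noise variance
D = np.diff(np.eye(n), axis=0)      # first-difference penalty matrix
beta = 5.0

# PWLS estimate: argmin (y-x)^T W (y-x) + beta * ||D x||^2  (closed form here)
x_hat = np.linalg.solve(W + beta * D.T @ D, W @ y)

noise_in = np.mean((y - truth) ** 2)
noise_out = np.mean((x_hat - truth) ** 2)
```

    The paper replaces the quadratic smoothness term with the nonlocal prior-image penalty and solves the resulting objective iteratively rather than in closed form.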

  14. Iterative Reconstruction for X-Ray Computed Tomography using Prior-Image Induced Nonlocal Regularization

    PubMed Central

    Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-01-01

    Repeated x-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the x-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection and edge detail preservation. PMID:24235272

  15. PDQ-8 reference manual (LWBR development program)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfiefer, C J; Spitz, C J

    1978-05-01

    The PDQ-8 program is designed to solve the neutron diffusion-depletion problem in one, two, or three dimensions on the CDC-6600 and CDC-7600 computers. The three-dimensional spatial calculation may be either explicit or discontinuous trial function synthesis. Up to five lethargy groups are permitted. The fast group treatment may be simplified P(3), and the thermal neutrons may be represented by a single group or a pair of overlapping groups. Adjoint, fixed source, one iteration, additive fixed source, eigenvalue, and boundary value calculations may be performed. The HARMONY system is used for cross section variation and generalized depletion chain solutions. The depletion is a combination of gross block depletion for all nuclides and fine block depletion for a specified subset of the nuclides. The geometries available include rectangular, cylindrical, spherical, hexagonal, and a very general quadrilateral geometry with diagonal interfaces. All geometries allow variable mesh in all dimensions. Various control searches as well as temperature and xenon feedbacks are provided. The synthesis spatial solution time is dependent on the number of trial functions used and the number of gross blocks. The PDQ-8 program is used at Bettis on a production basis for solving diffusion-depletion problems. The report describes the various features of the program and then separately describes the input required to utilize these features.

  16. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with standard-dose CT using 30% adaptive statistical iterative reconstruction (ASIR). This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both the ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones of 3 mm and above.

  17. The Role of Combined ICRF and NBI Heating in JET Hybrid Plasmas in Quest for High D-T Fusion Yield

    NASA Astrophysics Data System (ADS)

    Mantsinen, Mervi; Challis, Clive; Frigione, Domenico; Graves, Jonathan; Hobirk, Joerg; Belonohy, Eva; Czarnecka, Agata; Eriksson, Jacob; Gallart, Dani; Goniche, Marc; Hellesen, Carl; Jacquet, Philippe; Joffrin, Emmanuel; King, Damian; Krawczyk, Natalia; Lennholm, Morten; Lerche, Ernesto; Pawelec, Ewa; Sips, George; Solano, Emilia R.; Tsalas, Maximos; Valisa, Marco

    2017-10-01

    Combined ICRF and NBI heating played a key role in achieving the world-record fusion yield in the first deuterium-tritium campaign at the JET tokamak in 1997. The current plans for JET include new experiments with deuterium-tritium (D-T) plasmas under more ITER-like conditions given the recently installed ITER-like wall (ILW). In the 2015-2016 campaigns, significant efforts have been devoted to the development of high-performance plasma scenarios compatible with the ILW in preparation for the forthcoming D-T campaign. Good progress was made in both the inductive (baseline) and the hybrid scenario: a new record JET ILW fusion yield with a significantly extended duration of the high-performance phase was achieved. This paper reports on the progress with the hybrid scenario, which is a candidate for ITER long-pulse operation (~1000 s) thanks to its improved normalized confinement, reduced plasma current, and higher plasma beta with respect to the ITER reference baseline scenario. The combined NBI+ICRF power in the hybrid scenario was increased to 33 MW and the record fusion yield, averaged over 100 ms, to 2.9×10^16 neutrons/s, up from the 2014 ILW fusion record of 2.3×10^16 neutrons/s. Impurity control with ICRF waves was one of the key means for extending the duration of the high-performance phase. The main results are reviewed, covering both key core and edge plasma issues.

  18. Efficient randomization of biological networks while preserving functional characterization of individual nodes.

    PubMed

    Iorio, Francesco; Bernardo-Faura, Marti; Gobbi, Andrea; Cokelaer, Thomas; Jurman, Giuseppe; Saez-Rodriguez, Julio

    2016-12-20

    Networks are popular and powerful tools to describe and model biological processes. Many computational methods have been developed to infer biological networks from literature, high-throughput experiments, and combinations of both. Additionally, a wide range of tools has been developed to map experimental data onto reference biological networks, in order to extract meaningful modules. Many of these methods assess the significance of results against null distributions of randomized networks. However, these standard unconstrained randomizations do not preserve the functional characterization of the nodes in the reference networks (i.e. their degrees and connection signs), hence introducing potential biases into the assessment. Building on our previous work on rewiring bipartite networks, we propose a method for rewiring any type of unweighted network. In particular, we formally demonstrate that the problem of rewiring a signed and directed network while preserving its functional connectivity (F-rewiring) reduces to the problem of rewiring two induced bipartite networks. Additionally, we reformulate the lower bound on the number of iterations of the switching algorithm to make it suitable for the F-rewiring of networks of any size. Finally, we present BiRewire3, an open-source Bioconductor package enabling the F-rewiring of any type of unweighted network. We illustrate its application to a case study on the identification of modules from gene expression data mapped onto protein interaction networks, and a second one focused on building logic models from more complex signed-directed reference signaling networks and phosphoproteomic data. BiRewire3 is freely available at https://www.bioconductor.org/packages/BiRewire/ , and it should have broad application as it allows an efficient and analytically derived statistical assessment of results from any network biology tool.
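
    The switching algorithm referenced above can be sketched for a directed network: repeatedly pick two edges and swap their endpoints, rejecting swaps that would create self-loops or duplicate edges, so that every node keeps its in- and out-degree:

```python
import random

def switch_rewire(edges, n_iter, seed=0):
    """Degree-preserving randomization of a directed edge list by
    edge switching: (a,b),(c,d) -> (a,d),(c,b) when the swap is valid."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]      # assumes unique input edges
    edge_set = set(edges)
    for _ in range(n_iter):
        (a, b), (c, d) = rng.sample(edges, 2)
        if a == d or c == b:               # would create a self-loop
            continue
        if (a, d) in edge_set or (c, b) in edge_set:  # would duplicate an edge
            continue
        i, j = edges.index((a, b)), edges.index((c, d))
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3), (2, 4)]
rewired = switch_rewire(edges, n_iter=200, seed=1)
```

    Signs would be handled, as in the paper's reduction, by rewiring the positive-edge and negative-edge bipartite subnetworks separately.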

  19. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
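
    The transform-and-threshold idea behind matrix source coding can be sketched with a 2D DCT in place of the authors' transforms: code the dense operator in a basis where it is compressible, discard small coefficients, and check that matrix-vector products remain accurate.

```python
import numpy as np
from scipy.fft import dctn, idctn

n = 64
i = np.arange(n)
A = np.exp(-np.abs(i[:, None] - i[None, :]) / 16.0)  # dense, slowly varying kernel

C = dctn(A, norm="ortho")                  # transform-domain representation
thresh = 1e-4 * np.abs(C).max()
S = np.where(np.abs(C) > thresh, C, 0.0)   # lossy coding: drop small coefficients
A_approx = idctn(S, norm="ortho")

x = np.random.default_rng(1).standard_normal(n)
err = np.linalg.norm(A @ x - A_approx @ x) / np.linalg.norm(A @ x)
```

    In the full method the thresholded representation is kept in transform space, so matrix-vector products use the sparse coded matrix sandwiched between fast transforms; this sketch only verifies that the lossy coding preserves matvec accuracy.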

  20. Negative hydrogen ion production in a helicon plasma source

    NASA Astrophysics Data System (ADS)

    Santoso, J.; Manoharan, R.; O'Byrne, S.; Corr, C. S.

    2015-09-01

    In order to develop very high energy (>1 MeV) neutral beam injection systems for applications such as plasma heating in fusion devices, it is necessary first to develop high-throughput negative ion sources. For the ITER reference source, this will be realised using caesiated inductively coupled plasma devices, containing either hydrogen or deuterium discharges, operated with high rf input powers (up to 90 kW per driver). It has been suggested that, due to their high power-coupling efficiency, helicon devices may be able to reduce power requirements and, given the high plasma densities achievable, potentially obviate the need for caesiation. Here, we present measurements of negative ion densities in a hydrogen discharge produced by a helicon device, with externally applied DC magnetic fields ranging from 0 to 8.5 mT at 5 and 10 mTorr fill pressures. These measurements were taken in the magnetised plasma interaction experiment at the Australian National University and were performed using the probe-based laser photodetachment technique, modified for use in the afterglow of the plasma discharge. A peak in the electron density is observed at ˜3 mT and is correlated with changes in the rf power transfer efficiency. With increasing magnetic field, an increase in the negative ion fraction from 0.04 to 0.10 and in negative ion densities from 8 × 10^14 m^-3 to 7 × 10^15 m^-3 is observed. It is also shown that the negative ion densities can be increased by a factor of 8 with the application of an external DC magnetic field.

  1. 41 CFR 302-17.13 - Source references.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Source references. 302... references. The following references or publications have been used as source material for this part. (a...) Internal Revenue Service Publication 521, “Moving Expenses.” (c) Internal Revenue Service, Circular E...

  2. Iterative combination of national phenotype, genotype, pedigree, and foreign information

    USDA-ARS?s Scientific Manuscript database

    Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...

  3. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to the recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus in each image is updated; thus, feedback is incorporated into the recovery process, which is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
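The inner-loop iterative-transform step belongs to the Gerchberg-Saxton family: alternately enforce the known pupil amplitude and the measured image-plane amplitude. A minimal single-plane sketch (no defocus diversity, hence limited to sub-wavelength aberrations, which is the very limitation the adaptive-diversity outer loop addresses):

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=100, seed=0):
    """Recover a pupil phase from the known pupil amplitude and the
    measured focal-plane amplitude by alternating projections."""
    rng = np.random.default_rng(seed)
    field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = focal_amp * np.exp(1j * np.angle(F))          # impose measured amplitude
        field = np.fft.ifft2(F)
        field = pupil_amp * np.exp(1j * np.angle(field))  # impose pupil amplitude
    return np.angle(field)
```

The residual between the modeled and measured focal-plane amplitudes is non-increasing over iterations, which is what makes the method attractive as an inner-loop engine.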

  4. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    PubMed

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% added noise, and the successful image reconstruction demonstrates its robustness.

  5. Burning plasma regime for Fusion-Fission Research Facility

    NASA Astrophysics Data System (ADS)

    Zakharov, Leonid E.

    2010-11-01

    The basic aspects of burning plasma regimes of the Fusion-Fission Research Facility (FFRF, R/a=4/1 m/m, Ipl=5 MA, Btor=4-6 T, P^DT=50-100 MW, P^fission=80-4000 MW, 1 m thick blanket), which is suggested as the next-step device for the Chinese fusion program, are presented. The mission of FFRF is to advance magnetic fusion to the level of a stationary neutron source and to create a technical, scientific, and technology basis for the utilization of high-energy fusion neutrons for the needs of nuclear energy and technology. FFRF will rely as much as possible on the ITER design. Thus, the magnetic system, especially the TFC, will take advantage of ITER experience. The TFC will use the same superconductor as ITER. The plasma regimes will represent an extension of the stationary plasma regimes on the HT-7 and EAST tokamaks at ASIPP. Both inductive discharges and stationary non-inductive Lower Hybrid Current Drive (LHCD) will be possible. FFRF strongly relies on new, Lithium Wall Fusion (LiWF) plasma regimes, the development of which will be done on NSTX, HT-7, and EAST in parallel with the design work. This regime will eliminate a number of uncertainties still remaining unresolved in the ITER project. Well-controlled, hours-long inductive current drive operation at P^DT=50-100 MW is predicted.

  6. Memory-induced nonlinear dynamics of excitation in cardiac diseases.

    PubMed

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
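The iterated-map framework can be illustrated with the classic APD restitution map APD_{n+1} = f(BCL - APD_n): a stable fixed point at long cycle lengths gives way to a period-2 alternans rhythm at short cycle lengths. A toy sketch with made-up constants and no memory variable (the paper's maps additionally carry a slow memory term):

```python
import math

def apd_restitution(bcl, n=1000, a0=200.0, a_max=300.0, b=250.0, tau=30.0):
    """Iterate APD_{n+1} = a_max - b*exp(-DI_n/tau) with DI_n = bcl - APD_n
    (all times in ms; constants are illustrative only).
    Returns the last 8 iterates to reveal the attractor's period."""
    a = a0
    hist = []
    for _ in range(n):
        di = bcl - a
        assert di > 0, "diastolic interval must stay positive"
        a = a_max - b * math.exp(-di / tau)
        hist.append(a)
    return hist[-8:]
```

With these constants, a basic cycle length of 500 ms converges to a single repeated APD (period-1), while 330 ms settles into an alternating long-short pattern (period-2 alternans).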

  7. EC power management and NTM control in ITER

    NASA Astrophysics Data System (ADS)

    Poli, Francesca; Fredrickson, E.; Henderson, M.; Bertelli, N.; Farina, D.; Figini, L.; Nowak, S.; Poli, E.; Sauter, O.

    2016-10-01

    The suppression of Neoclassical Tearing Modes (NTMs) is an essential requirement for the achievement of the demonstration baseline in ITER. The Electron Cyclotron upper launcher is specifically designed to provide highly localized heating and current drive for NTM stabilization. In order to assess the power management for shared applications, we have performed time-dependent simulations for ITER scenarios covering operation from half to full field. The free-boundary TRANSP simulations evolve the magnetic equilibrium and the pressure profiles in response to the heating and current drive sources and are interfaced with a GRE for the evolution of size and frequency of the magnetic islands. Combined with a feedback control of the EC power and the steering angle, these simulations are used to model the plasma response to NTM control, accounting for the misalignment of the EC deposition with the resonant surfaces, uncertainties in the magnetic equilibrium reconstruction and in the magnetic island detection threshold. Simulations indicate that the threshold for detection of the island should not exceed 2-3 cm, that pre-emptive control is a preferable option, and that for safe operation the power needed for NTM control should be reserved, rather than shared with other applications. Work supported by ITER under IO/RFQ/13/9550/JTR and by DOE under DE-AC02-09CH11466.

  8. Fusion Breeding for Sustainable, Mid Century, Carbon Free Power

    NASA Astrophysics Data System (ADS)

    Manheimer, Wallace

    2015-11-01

    If ITER achieves Q ~10, it is still very far from useful fusion. The fusion power, less the driver power, will allow only a small amount of power to be delivered, <~50 MW for an ITER-scale tokamak. It is unlikely, considering ``conservative design rules,'' that tokamaks can ever be economical pure fusion power producers. Considering the status of other magnetic fusion concepts, it is also very unlikely that any alternate concept will be either. Laser fusion does not seem to be constrained by any conservative design rules, but considering the failure of NIF to achieve ignition, at this point it has many more obstacles to overcome than magnetic fusion. One way out of this dilemma is to use an ITER-size tokamak, or a NIF-size laser, as a fuel breeder for separate nuclear reactors. Hence ITER and NIF become ends in themselves, instead of steps to who knows what DEMO decades later. Such a tokamak can easily live within the constraints of conservative design rules. This has led the author to propose ``The Energy Park,'' a sustainable, carbon-free, economical, and environmentally viable power source without proliferation risk, in which one fusion breeder fuels five conventional nuclear reactors and one fast-neutron reactor burns the actinide wastes.

  9. Memory-induced nonlinear dynamics of excitation in cardiac diseases

    NASA Astrophysics Data System (ADS)

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.

  10. Some not such wonderful magnetic fusion facts; and their solution

    NASA Astrophysics Data System (ADS)

    Manheimer, Wallace

    2017-10-01

    The first not such wonderful fusion fact (NSWFF) is that, even if ITER is successful, it is nowhere near ready to develop into a DEMO. The design Q=10, along with an electricity-generating efficiency of 1/3, prevents this. Making it smaller and cheaper, increasing the gain by a factor of 3 or 4 and the wall loading by an order of magnitude, is not a minor detail; it is not at all clear that success with ITER will lead to a similar, pure fusion DEMO. The second NSWFF is that tokamaks are unlikely to improve to the point where they can be effective fusion reactors, because their performance is limited by conservative design rules. The third NSWFF is that developing large fusion devices like ITER takes an enormous amount of time and money; there are no second chances. The fourth NSWFF is that it is unlikely that alternative confinement configurations will succeed either, at least in this century; they are simply too far behind. There is only a single solution for fusion to become a sustainable, carbon-free power source by midcentury or shortly thereafter: to develop ITER (assuming it is successful) into a fusion breeder. This work was not supported by any organization, private or public.

  11. Free-breathing Sparse Sampling Cine MR Imaging with Iterative Reconstruction for the Assessment of Left Ventricular Function and Mass at 3.0 T.

    PubMed

    Sudarski, Sonja; Henzler, Thomas; Haubenreisser, Holger; Dösch, Christina; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Borggrefe, Martin; Schoenberg, Stefan O; Papavassiliu, Theano

    2017-01-01

    Purpose To prospectively evaluate the accuracy of left ventricle (LV) analysis with a two-dimensional real-time cine true fast imaging with steady-state precession (trueFISP) magnetic resonance (MR) imaging sequence featuring sparse data sampling with iterative reconstruction (SSIR) performed with and without breath-hold (BH) commands at 3.0 T. Materials and Methods Ten control subjects (mean age, 35 years; range, 25-56 years) and 60 patients scheduled to undergo a routine cardiac examination that included LV analysis (mean age, 58 years; range, 20-86 years) underwent a fully sampled segmented multiple BH cine sequence (standard of reference) and a prototype undersampled SSIR sequence performed during a single BH and during free breathing (non-BH imaging). Quantitative analysis of LV function and mass was performed. Linear regression, Bland-Altman analysis, and paired t testing were performed. Results Similar to the results in control subjects, analysis of the 60 patients showed excellent correlation with the standard of reference for single-BH SSIR (r = 0.93-0.99) and non-BH SSIR (r = 0.92-0.98) for LV ejection fraction (EF), volume, and mass (P < .0001 for all). Irrespective of breath holding, LV end-diastolic mass was overestimated with SSIR (standard of reference: 163.9 g ± 58.9, single-BH SSIR: 178.5 g ± 62.0 [P < .0001], non-BH SSIR: 175.3 g ± 63.7 [P < .0001]); the other parameters were not significantly different (EF: 49.3% ± 11.9 with standard of reference, 48.8% ± 11.8 with single-BH SSIR, 48.8% ± 11 with non-BH SSIR; P = .03 and P = .12, respectively). Bland-Altman analysis showed similar measurement errors for single-BH SSIR and non-BH SSIR when compared with standard of reference measurements for EF, volume, and mass. Conclusion Assessment of LV function with SSIR at 3.0 T is noninferior to the standard of reference irrespective of BH commands. LV mass, however, is overestimated with SSIR. 
© RSNA, 2016 Online supplemental material is available for this article.
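Bland-Altman analysis, as used above, reduces to the mean difference (bias) and the 95% limits of agreement; a generic sketch, not tied to the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two methods measuring
    the same quantity: mean difference +/- 1.96 SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```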

  12. SU-E-T-792: Validation of a Secondary TPS for IROC-H Recalculation of Anthropomorphic Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Howell, R; Followill, D

    2015-06-15

    Purpose: To validate a secondary treatment planning system (sTPS) for use by the Imaging & Radiation Oncology Core-Houston (IROC-H). The sTPS will recalculate phantom irradiations submitted by institutions to IROC-H, and the institutions' plan results will be compared against it. Methods: In-field dosimetric data were collected by IROC-H for numerous linacs at 6, 10, 15, and 18 MV. The data were aggregated and used to define reference linac classes; each class was then modeled in the sTPS (Mobius3D) by matching the in-field characteristics. Fields used to collect IROC-H data were recreated and recalculated using Mobius3D. The same dosimetric points were measured in the recalculation and compared to the initial collection data. Additionally, a 6 MV Monte Carlo beam configuration was used to compare penumbrae in the Mobius3D models. Finally, a handful of IROC-H head and neck phantoms were recalculated using Mobius3D. Results: Recalculation and quantification of differences between reference data and Mobius3D values resulted in a relative matching score of 12.45 (0 is a perfect match) for the default 6 MV Mobius3D beam configuration. By adjusting beam configuration options, iterations resulted in scores of 8.45, 6.32, and 3.52, showing that customization could have a dramatic effect on beam configuration. After in-field optimization, penumbra was compared between Monte Carlo and Mobius3D for the reference fields. For open-jaw fields, FWHM field widths and penumbra widths differed by <0.6 mm and <1 mm, respectively; for MLC open fields the penumbra widths differed by up to 1.5 mm. Phantom recalculations showed good agreement, with an average of 0.6% error per beam. Conclusion: A secondary TPS has been validated for simple irradiation geometries using reference data collected by IROC-H. The beam was customized to the reference data iteratively and resulted in a good match.
This system can provide independent recalculation of phantom plans based on independent reference data.

  13. Manual of Documentation Practices Applicable to Defence-Aerospace Scientific and Technical Information. Volume 1. Section 1 - Acquisition and Sources. Section 2 - Descriptive Cataloguing. Section 3 - Abstracting and Subject Analysis

    DTIC Science & Technology

    1978-08-01

    weeding 11 ORGANISATION & MANAGEMENT Aims and objectives, staffing, promotional activities, identifying users 12 NETWORKS & EXTERNAL SOURCES OF...Acquisition Clerks with typing capability are required for meticulous recordkeeping. Typing capability of 50 words per minute and a working knowledge ...81 Administration and Management Includes management planning and research. 64 Numerical Analysis Includes iteration, difference equations, and 82

  14. What Is the Reference? An Examination of Alternatives to the Reference Sources Used in IES TM-30-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, Michael P.

    A study was undertaken to document the role of the reference illuminant in the IES TM-30-15 method for evaluating color rendition. TM-30-15 relies on a relative reference scheme; that is, the reference illuminant and test source always have the same correlated color temperature (CCT). The reference illuminant is a Planckian radiator, a model of daylight, or a combination of the two, depending on the exact CCT of the test source. Three alternative reference schemes were considered: 1) using either all Planckian radiators or all daylight models; 2) using only one of ten possible illuminants (Planckian, daylight, or equal energy), regardless of the CCT of the test source; 3) using an off-Planckian reference illuminant (i.e., a source with a negative Duv). No reference scheme is inherently superior to another, with differences in metric values largely a result of small differences in gamut shape of the reference alternatives. While using any of the alternative schemes is more reasonable in the TM-30-15 evaluation framework than it was with the CIE CRI framework, the differences still ultimately manifest only as changes in interpretation of the results. References are employed in color rendering measures to provide a familiar point of comparison, not to establish an ideal source.

  15. Robust High Data Rate MIMO Underwater Acoustic Communications

    DTIC Science & Technology

    2010-12-31

    algorithm is referred to as periodic CAN (PeCAN). Unlike most existing sequence construction methods, which are algebraic and deterministic in nature, we...start the iteration of PeCAN from random phase initializations and then proceed to cyclically minimize the desired metric. In this way, through...by the foe and hence are especially useful as training sequences or as spreading sequences for UAC applications. We will use PeCAN sequences for

  16. Multidisciplinary Thermal Analysis of Hot Aerospace Structures

    DTIC Science & Technology

    2010-05-02

    Seidel iteration. Such a strategy simplifies explicit/implicit treatment, subcycling, load balancing, software modularity, and replacements as better... Stefan-Boltzmann constant, E is the emissivity of the surface, f is the form factor from the surface to the reference surface, Br is the temperature of...Stokes equations using Gauss-Seidel line Relaxation, Computers and Fluids, 17, pp. 135-150, 1989. [22] Hung C.M. and MacCormack R.W., Numerical

  17. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating its potential for use in real-time systems.
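The point-to-point fallback referred to above, the iterative closest point (ICP) loop, alternates nearest-neighbour association with a closed-form rigid-transform fit (Kabsch/SVD). A 2-D toy sketch, not the paper's RKF implementation:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation + translation mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iter=10):
    """Basic point-to-point ICP with brute-force nearest-neighbour association."""
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        match = dst[d2.argmin(axis=1)]         # nearest neighbour in dst
        R, t = best_rigid(cur, match)
        cur = cur @ R.T + t
    return cur
```

The iterative nature of this loop, and its sensitivity to wrong associations, is exactly what motivates the RKF hybrid above.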

  18. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region-growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region-growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included, comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  19. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.

  20. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
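The flavor of an online pairwise update can be sketched with a linear kernel: each new example is paired with a previously seen one, and a gradient step is taken on the squared pairwise loss with a polynomially decaying step size. This loosely mirrors the structure of online pairwise algorithms; it is a hedged illustration, not OPERA's exact update or its RKHS analysis:

```python
import numpy as np

def online_pairwise_ls(X, y, gamma0=0.05, theta=0.51, seed=0):
    """Online pairwise least squares: at step t, pair example x_t with a
    random past example x_i and take a gradient step on
    0.5 * (w.(x_t - x_i) - (y_t - y_i))^2 with step size gamma0 * t**(-theta).
    Linear model for simplicity (a kernel expansion would replace w)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, len(X)):
        i = rng.integers(0, t)              # pair with a previously seen example
        diff = X[t] - X[i]
        target = y[t] - y[i]                # pairwise regression target
        grad = (w @ diff - target) * diff   # gradient of the squared pairwise loss
        w -= gamma0 * t ** (-theta) * grad
    return w
```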

  1. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, such as 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of a few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are then aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment methods, are finally reported.

  2. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    PubMed Central

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2011-01-01

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
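
    The tissue-specific intensity estimate that is recomputed at each registration iteration can be illustrated roughly as below. This is a sketch only: a crude 1-D k-means on the CBCT values stands in for the paper's tissue classification, and all names are hypothetical.

```python
import numpy as np

def tissue_intensity_map(ct, cbct, n_classes=3, n_iter=10):
    """Estimate a tissue-specific CBCT -> CT intensity mapping from the
    currently overlapping voxels (one step of the iterative scheme).

    ct, cbct: 1-D arrays of intensities at currently overlapping voxels.
    Voxels are grouped into crude tissue classes by 1-D k-means on the
    CBCT values; each voxel's corrected intensity is the mean CT value
    of its class.
    """
    centers = np.linspace(cbct.min(), cbct.max(), n_classes)
    for _ in range(n_iter):                   # 1-D k-means on CBCT intensity
        labels = np.argmin(np.abs(cbct[:, None] - centers[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = cbct[labels == k].mean()
    # corrected CBCT intensity = mean CT value of the voxel's tissue class
    class_ct = np.array([ct[labels == k].mean() if np.any(labels == k) else 0.0
                         for k in range(n_classes)])
    return class_ct[labels], labels
```

    In the full method this mapping would be re-estimated from the newly overlapping voxels after every Demons update, so the intensity correction and the deformation converge together.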

  3. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali

    2011-04-15

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
    Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  4. Ways to improve the efficiency and reliability of radio frequency driven negative ion sources for fusion.

    PubMed

    Kraus, W; Briefi, S; Fantz, U; Gutmann, P; Doerfler, J

    2014-02-01

    Large RF driven negative hydrogen ion sources are being developed at IPP Garching for the future neutral beam injection system of ITER. The overall power efficiency of these sources is low, because self-excited oscillator generators are used for the RF power supply and the plasma is generated in small cylindrical sources ("drivers") before expanding into the main source volume. At IPP, experiments to reduce the primary power and the RF power required for plasma production are being pursued in two ways: the oscillator generator of the prototype source has been replaced by a transistorized RF transmitter, and two alternative driver concepts are being tested, namely a spiral coil in which the field is concentrated by ferrites, avoiding the losses caused by plasma expansion, and a helicon source.

  5. Calculating Remote Sensing Reflectance Uncertainties Using an Instrument Model Propagated Through Atmospheric Correction via Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Karakoylu, E.; Franz, B.

    2016-01-01

    This is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is computed for remote sensing reflectance (Rrs) at 443 nm.
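
    The generic scheme implied here, perturb the inputs with an instrument noise model, rerun the processing, and take the spread of the outputs as the uncertainty, can be sketched as follows. This is not the actual SeaWiFS processing chain; `f` stands in for the full atmospheric correction, and the Gaussian noise model is an assumption.

```python
import numpy as np

def mc_uncertainty(f, x0, sigma, n=1000, seed=0):
    """Monte Carlo uncertainty propagation: perturb the input x0 with
    zero-mean Gaussian noise of width sigma n times, push each sample
    through the processing chain f, and report the mean and spread of
    the outputs.
    """
    rng = np.random.default_rng(seed)
    samples = np.array([f(x0 + rng.normal(0.0, sigma, size=np.shape(x0)))
                        for _ in range(n)])
    return samples.mean(axis=0), samples.std(axis=0)
```

    With n = 1000 samples, as in the abstract, the standard error on the estimated uncertainty itself is a few percent, usually adequate for mapping Rrs uncertainty.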

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozhdestvenskyy, S.

    This work iterates on the first demonstration of a solid-state neutron multiplicity counting system developed at Lawrence Livermore National Laboratory by using commercial off-the-shelf detectors. The system was demonstrated to determine the mass of a californium-252 neutron source within 20% error, requiring only a one-hour measurement time with 20 cm² of active detector area.

  7. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  8. A Parallel Fast Sweeping Method for the Eikonal Equation

    NASA Astrophysics Data System (ADS)

    Baker, B.

    2017-12-01

    Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method, because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al., 2014 is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
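
    A minimal 2-D version of the fast sweeping iteration (first-order Godunov upwind update with the four alternating sweep orderings, the 2-D analogue of the eight orderings mentioned for 3-D) might look like the sketch below. The high-accuracy stencil and the DAG-based parallel scheduling of the abstract are deliberately not reproduced.

```python
import numpy as np

def fast_sweep(slowness, src, h=1.0, n_sweeps=4):
    """Fast sweeping method for the 2-D Eikonal equation |grad T| = s.

    slowness: (ny, nx) array of slowness values s; src: (i, j) source index;
    h: grid spacing.  Each outer pass performs the four alternating sweep
    orderings, applying the first-order Godunov upwind update at each node.
    Boundary neighbours clamp to the node itself, which is harmless because
    values never increase.
    """
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    for _ in range(n_sweeps):
        for di, dj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            for i in range(ny)[::di]:
                for j in range(nx)[::dj]:
                    if (i, j) == tuple(src):
                        continue
                    a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                    b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                    if not np.isfinite(min(a, b)):
                        continue              # no causal neighbour yet
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:       # information from one side only
                        t = min(a, b) + f
                    else:                     # two-sided quadratic update
                        t = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)
    return T
```

    The fixed sweep orderings are what makes the method attractive for parallelization: within one ordering, the nodes reachable at the same "depth" of the induced DAG are independent and can be updated concurrently.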

  9. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  10. Tests of a two-color interferometer and polarimeter for ITER density measurements

    NASA Astrophysics Data System (ADS)

    Van Zeeland, M. A.; Carlstrom, T. N.; Finkenthal, D. K.; Boivin, R. L.; Colio, A.; Du, D.; Gattuso, A.; Glass, F.; Muscatello, C. M.; O'Neill, R.; Smiley, M.; Vasquez, J.; Watkins, M.; Brower, D. L.; Chen, J.; Ding, W. X.; Johnson, D.; Mauzey, P.; Perry, M.; Watts, C.; Wood, R.

    2017-12-01

    A full-scale 120 m path length ITER toroidal interferometer and polarimeter (TIP) prototype, including an active feedback alignment system, has been constructed and has undergone initial testing at General Atomics. In the TIP prototype, two-color interferometry is carried out at 10.59 μm and 5.22 μm using a CO2 laser and a quantum cascade laser (QCL) respectively, while a separate polarimetry measurement of the plasma-induced Faraday effect is made at 10.59 μm. The polarimeter system uses co-linear right- and left-hand circularly polarized beams, upshifted by 40 and 44 MHz acousto-optic cells respectively, to generate the necessary beat signal for heterodyne phase detection, while interferometry measurements are carried out at both 40 MHz and 44 MHz for the CO2 laser and at 40 MHz for the QCL. The high-resolution phase information is obtained using an all-digital FPGA-based phase demodulation scheme and a precision clock source. The TIP prototype is equipped with a piezo tip/tilt stage active feedback alignment system responsible for minimizing noise in the measurement and keeping the TIP diagnostic aligned indefinitely on its 120 m beam path, including as the ITER vessel is brought from ambient to operating temperatures. The prototype beam path incorporates translation stages to simulate ITER motion through a bake cycle as well as other sources of motion or misalignment. Even in the presence of significant motion, the TIP prototype is able to meet ITER's density measurement requirements over 1000 s shot durations, with demonstrated phase resolution of 0.06° and 1.5° for the polarimeter and vibration-compensated interferometer respectively. TIP vibration-compensated interferometer measurements of a plasma have also been made in a pulsed radio frequency device and show a line-integrated density resolution of δ(nL) = 3.5 × 10^17 m^-2.
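
    The two-color compensation amounts to solving a 2×2 linear system for the line-integrated density and the vibration-induced path-length change. A sketch under the standard two-color phase model (an assumption; this is not TIP's actual FPGA processing):

```python
import numpy as np

RE = 2.818e-15  # classical electron radius [m]

def two_color(phi1, phi2, lam1=10.59e-6, lam2=5.22e-6):
    """Recover the line-integrated density N = integral(n_e dl) and the
    path-length change dL from the phases measured at two wavelengths.

    Assumed model (standard two-color interferometry):
        phi_i = RE * lam_i * N + 2*pi*dL / lam_i
    The plasma term scales with wavelength while the vibration term scales
    inversely, which is what makes the two contributions separable.
    """
    A = np.array([[RE * lam1, 2.0 * np.pi / lam1],
                  [RE * lam2, 2.0 * np.pi / lam2]])
    N, dL = np.linalg.solve(A, np.array([phi1, phi2]))
    return N, dL
```

    With the CO2/QCL wavelength pair the two columns are well separated in scale, so the recovery is numerically benign even for small density phase shifts on top of large vibration phases.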

  11. Low-dose 4D cardiac imaging in small animals using dual source micro-CT

    NASA Astrophysics Data System (ADS)

    Holbrook, M.; Clark, D. P.; Badea, C. T.

    2018-01-01

    Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies, combined with a sophisticated iterative image reconstruction algorithm, provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D + time) cardiac micro-CT. Fast PG is performed under continuous subject rotation, which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies, in conjunction with our iterative reconstruction method, are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only the 1000-projection reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. 
These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.

  12. Activation characteristics of candidate structural materials for a near-term Indian fusion reactor and the impact of their impurities on design considerations

    NASA Astrophysics Data System (ADS)

    Swami, H. L.; Danani, C.; Shaw, A. K.

    2018-06-01

    Activation analyses play a vital role in nuclear reactor design. Activation analyses, along with nuclear analyses, provide important information for nuclear safety and maintenance strategies. Activation analyses also help in the selection of materials for a nuclear reactor, by providing the radioactivity and dose rate levels after irradiation. This information is important to help define maintenance activity for different parts of the reactor, and to plan decommissioning and radioactive waste disposal strategies. The study of activation analyses of candidate structural materials for near-term fusion reactors or ITER is equally essential, due to the presence of a high-energy neutron environment which makes decisive demands on material selection. This study comprises two parts; in the first part the activation characteristics, in a fusion radiation environment, of several elements which are widely present in structural materials, are studied. It reveals that the presence of a few specific elements in a material can diminish its feasibility for use in the nuclear environment. The second part of the study concentrates on activation analyses of candidate structural materials for near-term fusion reactors and their comparison in fusion radiation conditions. The structural materials selected for this study, i.e. India-specific Reduced Activation Ferritic‑Martensitic steel (IN-RAFMS), P91-grade steel, stainless steel 316LN ITER-grade (SS-316LN-IG), stainless steel 316L and stainless steel 304, are candidates for use in ITER either in vessel components or test blanket systems. Tungsten is also included in this study because of its use for ITER plasma-facing components. The study is carried out using the reference parameters of the ITER fusion reactor. The activation characteristics of the materials are assessed considering the irradiation at an ITER equatorial port. 
    The presence of elements like Nb, Mo, Co and Ta in a structural material enhances the activity level as well as the dose level, which has an impact on design considerations. IN-RAFMS was shown to be a more effective low-activation material than SS-316LN-IG.

  13. Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM

    NASA Astrophysics Data System (ADS)

    Liang, Zijun; Lin, Shunjiang; Liu, Mingbo

    2017-05-01

    Distributed optimal power flow (OPF) is of great importance, and a challenge, for AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of an AC/DC interconnected power grid, called synchronous ADMM, is proposed; it requires no central controller. The algorithm is based on the fundamental alternating direction method of multipliers (ADMM): the average of the boundary variables of adjacent regions obtained in the current iteration is used as the reference value of both regions for the next iteration, which enables parallel computation among the regions. The algorithm is tested on the IEEE 11-bus AC/DC interconnected power grid; comparison with a centralized algorithm shows nearly no difference, validating its correctness and effectiveness.
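
    The boundary-averaging idea can be illustrated on a toy consensus problem. Each "region" is reduced here to a scalar quadratic cost on a shared boundary variable, a drastic simplification of an OPF subproblem; only the local copies are exchanged, and their average plays the role of the boundary reference.

```python
import numpy as np

def admm_consensus(a, c, rho=1.0, n_iter=500):
    """Toy synchronous ADMM consensus between regions.

    Region i holds the local cost 0.5*a_i*(x - c_i)^2 on a shared boundary
    variable x.  The x-updates run in parallel; the averaged value z is the
    boundary reference broadcast for the next iteration.
    """
    a, c = np.asarray(a, float), np.asarray(c, float)
    x = np.zeros_like(c)
    u = np.zeros_like(c)      # scaled dual variables
    z = 0.0
    for _ in range(n_iter):
        # parallel local updates: closed-form minimizer of
        # 0.5*a_i*(x - c_i)^2 + 0.5*rho*(x - z + u_i)^2
        x = (a * c + rho * (z - u)) / (a + rho)
        z = np.mean(x + u)    # boundary reference = average of local copies
        u = u + x - z         # dual ascent
    return z
```

    An actual OPF x-update would replace the closed form with each region's constrained subproblem, but the synchronization pattern (local solve, average boundary variables, dual update) is the same.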

  14. Iterative non-sequential protein structural alignment.

    PubMed

    Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher

    2009-06-01

    Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pair-wise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments was assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.
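
    The superposition/alignment loop can be illustrated with an ICP-like sketch: superpose with the current correspondences, re-derive the correspondences, and repeat until they stop changing. Nearest-neighbour matching stands in for SNAP's greedy non-sequential alignment step, so this is only an illustration of the two-step iteration, not SNAP itself.

```python
import numpy as np

def iterate_superpose_align(A, B, n_iter=20):
    """Alternate a least-squares superposition step (SVD) with a
    correspondence (alignment) step until the correspondences converge.

    A, B: (N, 3) and (M, 3) coordinate arrays.  Returns the transformed
    copy of A and the index into B matched to each point of A.
    """
    corr = np.argmin(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1), axis=1)
    for _ in range(n_iter):
        # superposition step: optimal rotation for current correspondences
        P, Q = A - A.mean(0), B[corr] - B[corr].mean(0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        A2 = (A - A.mean(0)) @ R.T + B[corr].mean(0)
        # alignment step: re-derive correspondences from the new pose
        new = np.argmin(((A2[:, None, :] - B[None, :, :]) ** 2).sum(-1), axis=1)
        if np.array_equal(new, corr):
            break
        corr = new
    return A2, corr
```

    Because neither step can worsen its own objective, the loop settles quickly; the quality of the final alignment depends on the initial correspondences, which is why SNAP's greedy construction of the starting alignment matters.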

  15. Enhancement of runaway production by resonant magnetic perturbation on J-TEXT

    NASA Astrophysics Data System (ADS)

    Chen, Z. Y.; Huang, D. W.; Izzo, V. A.; Tong, R. H.; Jiang, Z. H.; Hu, Q. M.; Wei, Y. N.; Yan, W.; Rao, B.; Wang, S. Y.; Ma, T. K.; Li, S. C.; Yang, Z. J.; Ding, D. H.; Wang, Z. J.; Zhang, M.; Zhuang, G.; Pan, Y.; J-TEXT Team

    2016-07-01

    The suppression of runaways following disruptions is key for the safe operation of ITER. Massive gas injection (MGI) has been developed to mitigate heat loads, electromagnetic forces and runaway electrons (REs) during disruptions. However, MGI may not completely prevent the generation of REs during disruptions on ITER. Resonant magnetic perturbation (RMP) has been applied to suppress runaway generation during disruptions on several machines. It was found that strong RMP results in the enhancement of runaway production instead of runaway suppression on J-TEXT. The runaway current was about 50% of the pre-disruption plasma current in argon-induced reference disruptions. With moderate RMP, the runaway current decreased to below 30% of the pre-disruption plasma current. The runaway current plateaus reached 80% of the pre-disruption current when strong RMP was applied. Strong RMP may induce large magnetic islands that could confine more runaway seed electrons during disruptions. This has important implications for runaway suppression on large machines.

  16. Marketing: A Bibliography of Marketing Reference Sources. The University of Rhode Island University Library.

    ERIC Educational Resources Information Center

    Masten, Lisa

    This annotated bibliography provides a selected list of marketing reference sources for undergraduate and graduate business students interested in marketing and related topics. All sources listed are available in the Reference Department at the University Library at the University of Rhode Island Kingston campus. Most sources, with the exception…

  17. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  18. Eight channel transmit array volume coil using on-coil radiofrequency current sources

    PubMed Central

    Kurpad, Krishna N.; Boskamp, Eddy B.

    2014-01-01

    Background At imaging frequencies associated with high-field MRI, the combined effects of increased load-coil interaction and shortened wavelength result in degradation of circular polarization and B1 field homogeneity in the imaging volume. Radio frequency (RF) shimming is known to mitigate the problem of B1 field inhomogeneity. Transmit arrays with well-decoupled transmitting elements enable accurate B1 field pattern control using simple, non-iterative algorithms. Methods An eight channel transmit array was constructed. Each channel consisted of a transmitting element driven by a dedicated on-coil RF current source. The coil current distributions of characteristic transverse electromagnetic (TEM) coil resonant modes were non-iteratively set up on each transmitting element and 3T MRI images of a mineral oil phantom were obtained. Results B1 field patterns of several linear and quadrature TEM coil resonant modes that typically occur at different resonant frequencies were replicated at 128 MHz without having to retune the transmit array. The generated B1 field patterns agreed well with simulation in most cases. Conclusions Independent control of current amplitude and phase on each transmitting element was demonstrated. The transmit array with on-coil RF current sources enables B1 field shimming in a simple and predictable manner. PMID:24834418
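
    Setting up a TEM-mode current distribution on well-decoupled elements is exactly the kind of non-iterative calculation the abstract refers to: uniform amplitudes with a phase that advances linearly around the array. A sketch (parameter names are hypothetical; mode 1 is the usual quadrature-like drive):

```python
import numpy as np

def tem_mode_currents(n_elements=8, mode=1, i0=1.0):
    """Complex element currents reproducing a TEM resonant-mode pattern:
    uniform amplitude i0, with the phase advancing by 2*pi*mode/N between
    neighbouring elements around the cylinder.
    """
    k = np.arange(n_elements)
    return i0 * np.exp(1j * 2.0 * np.pi * mode * k / n_elements)
```

    With a dedicated current source per channel, these amplitudes and phases are simply programmed, which is why different modes can be excited at one frequency without retuning the array.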

  19. Prototyping Control and Data Acquisition for the ITER Neutral Beam Test Facility

    NASA Astrophysics Data System (ADS)

    Luchetta, Adriano; Manduchi, Gabriele; Taliercio, Cesare; Soppelsa, Anton; Paolucci, Francesco; Sartori, Filippo; Barbato, Paolo; Breda, Mauro; Capobianco, Roberto; Molon, Federico; Moressa, Modesto; Polato, Sandro; Simionato, Paola; Zampiva, Enrico

    2013-10-01

    The ITER Neutral Beam Test Facility will be the project's R&D facility for heating neutral beam injectors (HNB) for fusion research operating with H/D negative ions. Its mission is to develop the technology to build the HNB prototype injector meeting the stringent HNB requirements (16.5 MW injection power, -1 MeV acceleration energy, 40 A ion current and one hour continuous operation). Two test-beds will be built in sequence in the facility: first SPIDER, the ion source test-bed, to optimize the negative ion source performance; and second MITICA, the actual prototype injector, to optimize ion beam acceleration and neutralization. The SPIDER control and data acquisition system is under design. To validate the main architectural choices, a system prototype has been assembled and performance tests have been executed to assess the prototype's capability to meet the control and data acquisition system requirements. The prototype is based on open-source software frameworks running under Linux: EPICS is the slow control engine, MDSplus is the data handler and MARTe is the fast control manager. The prototype addresses low and high-frequency data acquisition (10 kS/s and 10 MS/s respectively), camera image acquisition, data archiving, data streaming, data retrieval and visualization, real-time fast control with a 100 μs control cycle, and supervisory control.

  20. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  1. An iterative approach for compound detection in an unknown pharmaceutical drug product: Application on Raman microscopy.

    PubMed

    Boiret, Mathieu; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2016-02-20

    Raman chemical imaging provides both spectral and spatial information on a pharmaceutical drug product. Even if the main objective of chemical imaging is to obtain distribution maps of each formulation compound, identification of pure signals in a mixture dataset remains of huge interest. In this work, an iterative approach is proposed to identify the compounds in a pharmaceutical drug product, assuming that the chemical composition of the product is not known by the analyst and that a low dose compound can be present in the studied medicine. The proposed approach uses a spectral library, spectral distances and orthogonal projections to iteratively detect pure compounds of a tablet. Since the proposed method is not based on variance decomposition, it should be well adapted for a drug product which contains a low dose product, interpreted as a compound located in few pixels and with low spectral contributions. The method is tested on a tablet specifically manufactured for this study with one active pharmaceutical ingredient and five excipients. A spectral library, constituted of 24 pure pharmaceutical compounds, is used as a reference spectral database. Pure spectra of active and excipients, including a modification of the crystalline form and a low dose compound, are iteratively detected. Once the pure spectra are identified, multivariate curve resolution-alternating least squares process is performed on the data to provide distribution maps of each compound in the studied sample. Distributions of the two crystalline forms of active and the five excipients were in accordance with the theoretical formulation. Copyright © 2015 Elsevier B.V. All rights reserved.
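
    The detect-then-project loop can be sketched as a greedy matching-pursuit-style procedure: pick the library spectrum closest to some residual pixel spectrum, project both data and library onto its orthogonal complement so a found compound cannot be selected again, and repeat. Cosine similarity stands in for the paper's spectral distances, and all names are hypothetical.

```python
import numpy as np

def detect_compounds(D, L, n_compounds):
    """Greedy iterative compound detection.

    D: (n_pixels, n_channels) mixture spectra; L: (n_library, n_channels)
    reference library; returns the library indices detected, in order.
    Because selection is by best single-pixel match rather than overall
    variance, a compound confined to a few pixels can still be found.
    """
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    found = []
    R, Lr = D.copy(), np.asarray(L, float).copy()
    for _ in range(n_compounds):
        sims = np.abs(R @ Lr.T) / (
            np.linalg.norm(R, axis=1)[:, None]
            * np.linalg.norm(Lr, axis=1)[None, :] + 1e-12)
        k = np.unravel_index(np.argmax(sims), sims.shape)[1]
        found.append(k)
        v = Lr[k] / np.linalg.norm(Lr[k])
        P = np.eye(len(v)) - np.outer(v, v)   # orthogonal complement projector
        R, Lr = R @ P, Lr @ P                 # deflate data and library
    return found
```

    Once the pure spectra are identified this way, a curve-resolution step (MCR-ALS in the paper) would recover the distribution maps from the original, unprojected data.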

  2. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); English translation in Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = ax + b, where a = (a_ij) is a real n×n matrix and b is a real vector, with the usual Euclidean norm. Existence and uniqueness of the solution are assumed, i.e., det(e - a) ≠ 0, where e is the unit matrix. The linear iterative process converging to x is x^(k+1) = f x^(k), k = 0, 1, 2, ..., where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of the latter are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and the time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k + 1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
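    The serial baseline that the paper seeks to accelerate can be sketched in a few lines (an illustrative toy, not the author's parallel scheme; the matrix and vector values are invented for the example):

    ```python
    # Minimal sketch of the stationary iteration x^(k+1) = A x^(k) + b,
    # which converges to the fixed point when the spectral radius of A < 1.

    def stationary_iteration(A, b, x0, steps):
        """Run x <- A x + b for a fixed number of sequential steps."""
        n = len(b)
        x = list(x0)
        for _ in range(steps):
            x = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
        return x

    # 2x2 contraction: the fixed point solves x = A x + b, i.e. (I - A) x = b.
    A = [[0.2, 0.1],
         [0.0, 0.3]]
    b = [1.0, 1.0]
    x = stationary_iteration(A, b, [0.0, 0.0], 100)
    # Both components converge to 10/7 = 1.428571...
    ```

    Each pass through the loop is one of the k + 1 sequential steps whose count the paper's method aims to reduce.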

  3. Spatial and contrast resolution of ultralow dose dentomaxillofacial CT imaging using iterative reconstruction technology

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2017-01-01

    Objectives: The objective of this study was to determine how iterative reconstruction technology (IRT) influences contrast and spatial resolution in ultralow-dose dentomaxillofacial CT imaging. Methods: A polymethyl methacrylate phantom with various inserts was scanned using a reference protocol (RP) at CT dose index volume 36.56 mGy, a sinus protocol at 18.28 mGy and ultralow-dose protocols (LD) at 4.17 mGy, 2.36 mGy, 0.99 mGy and 0.53 mGy. All data sets were reconstructed using filtered back projection (FBP) and the following IRTs: adaptive statistical iterative reconstruction (ASIR-50, ASIR-100) and model-based iterative reconstruction (MBIR). Inserts containing line-pair patterns and contrast detail patterns for three different materials were scored by three observers. Observer agreement was analyzed using Cohen's kappa, and differences in performance between the protocols and reconstructions were analyzed with Dunn's test at α = 0.05. Results: Interobserver agreement was acceptable, with a mean kappa value of 0.59. Compared with the RP using FBP, similar scores were achieved at 2.36 mGy using MBIR. MBIR reconstructions showed the highest noise suppression as well as good contrast even at the lowest doses. Overall, ASIR reconstructions did not outperform FBP. Conclusions: LD and MBIR at a dose reduction of >90% may show no significant differences in spatial and contrast resolution compared with an RP and FBP. Ultralow-dose CT and IRT should be further explored in clinical studies. PMID:28059562

  4. Computer-Aided Engineering of Semiconductor Integrated Circuits

    DTIC Science & Technology

    1979-07-01

    equation using a five-point finite difference approximation. Section 4.3.6 describes the numerical techniques and iterative algorithms which are used...neighbor points. This is generally referred to as a five-point finite difference scheme on a rectangular grid, as described below. The finite difference ...problems in steady state have been analyzed by the finite difference method [4.16], [4.17] or finite element method [4.18], [4.19] as reported last

  5. Department of Defense Costing References Web. Phase 1. Establishing the Foundation.

    DTIC Science & Technology

    1997-03-01

    a functional economic analysis under one set of constraints and having to repeat the entire process for the MAISRC. Recommendations for automated...MAISRC's acquisition oversight process. The cost and cycle time for each iteration can be on the order of $300,000 and 6 months, respectively...Institute resources were expected to become available at the conclusion of another BPR project. The contents list for the first Business Process

  6. Active Control of Radiated Sound with Integrated Piezoelectric Composite Structures. Volume 3: Appendices (Concl.)

    DTIC Science & Technology

    1998-11-06

    after many iterations of analysis, development, construction and testing was found to provide amplification ratios of around 250:1 and generate...IEEE International Symposium on Application of Ferroelectrics 2, 767-770 (1996). 11. "A Comparative Analysis of Piezoelectric Bending Mode Actuators...Active 95, 359-368, Newport Beach, CA (1995). 21. "Multiple Reference Feedforward Active Noise Control. Part I. Analysis and Simulation of Behavior," Y

  7. A hybrid multigroup neutron-pattern model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    In this paper, we use a general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economical and simple iterative method of solving them. The algorithm can be used to calculate the pattern and its functionals, as well as to correct the constants from experimental data and to adapt the constant support of engineering programs by reference to precision ones.

  8. Modeling of surface-dominated plasmas: from electric thruster to negative ion source.

    PubMed

    Taccogna, F; Schneider, R; Longo, S; Capitelli, M

    2008-02-01

    This contribution shows two important applications of the particle-in-cell/Monte Carlo technique to ion sources: modeling of the Hall thruster SPT-100 for space propulsion and of the rf negative ion source for ITER neutral beam injection. In the first case translational degrees of freedom are involved, while in the second case internal degrees of freedom (vibrational levels) are excited. Computational results show how, in both cases, plasma-wall and gas-wall interactions play a dominant role: secondary electron emission from the lateral ceramic wall of the SPT-100, and electron capture by positive ions and atoms from caesiated surfaces in the rf negative ion source.

  9. The optimal modified variational iteration method for the Lane-Emden equations with Neumann and Robin boundary conditions

    NASA Astrophysics Data System (ADS)

    Singh, Randhir; Das, Nilima; Kumar, Jitendra

    2017-06-01

    An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found which converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: (i) the nonlinear Poisson-Boltzmann equation, (ii) the distribution of heat sources in the human head, and (iii) a second-kind Lane-Emden equation.
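    The correction functional underlying a VIM scheme with a convergence-control parameter can be written, in its standard textbook form (a general statement under the usual Lane-Emden operator, not a quotation from this paper), as:

    ```latex
    % Standard VIM correction functional with convergence-control parameter h,
    % for a singular equation y'' + (alpha/x) y' + f(x, y) = 0.
    % \lambda(s) is the Lagrange multiplier identified via variational theory.
    y_{n+1}(x) = y_n(x)
      + h \int_{0}^{x} \lambda(s)
        \Big[\, y_n''(s) + \frac{\alpha}{s}\, y_n'(s) + f\big(s,\, y_n(s)\big) \Big]\, ds
    ```

    Choosing h to minimize the residual of the truncated series is what distinguishes the "optimal modified" variant from the standard VIM, in which h = 1.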

  10. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2, … and auxiliary functions H0(x), H1(x), H2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the squared residual error. To test the method, we apply it to two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.

  11. PROGRAM VSAERO: A computer program for calculating the non-linear aerodynamic characteristics of arbitrary configurations: User's manual

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1982-01-01

    VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.

  12. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    PubMed

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  13. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry

    NASA Astrophysics Data System (ADS)

    Bedggood, Phillip; Metha, Andrew

    2010-11-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  14. Long-range multi-carrier acoustic communications in shallow water based on iterative sparse channel estimation.

    PubMed

    Kang, Taehyuk; Song, H C; Hodgkiss, W S; Soo Kim, Jea

    2010-12-01

    Long-range orthogonal frequency division multiplexing (OFDM) acoustic communication is demonstrated using data from the Kauai Acomms MURI 2008 (KAM08) experiment carried out in about 106 m deep shallow water west of Kauai, HI, in June 2008. The source bandwidth was 8 kHz (12-20 kHz), and the data were received by a 16-element vertical array at a distance of 8 km. Iterative sparse channel estimation is applied in conjunction with low-density parity-check decoding. In addition, the impact of diversity combining in a highly inhomogeneous underwater environment is investigated. Error-free transmission using 16-quadrature amplitude modulation (16-QAM) is achieved at a data rate of 10 kb/s.
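    The idea behind iterative sparse channel estimation can be illustrated with a greedy matching-pursuit sketch (a generic illustration under simplifying assumptions — real-valued baseband samples and a known probe sequence — not the KAM08 receiver):

    ```python
    # Toy sparse channel estimation by matching pursuit: the channel impulse
    # response is assumed to have few significant taps, recovered greedily
    # one tap per iteration by correlating against delayed probe copies.

    def matching_pursuit_taps(probe, received, max_taps):
        """Greedily estimate sparse taps h such that received ~= h convolved with probe."""
        n = len(received)
        residual = list(received)
        taps = {}
        for _ in range(max_taps):
            # Correlate the residual against every delayed copy of the probe.
            best_d, best_c = 0, 0.0
            for d in range(n - len(probe) + 1):
                c = sum(probe[i] * residual[d + i] for i in range(len(probe)))
                if abs(c) > abs(best_c):
                    best_d, best_c = d, c
            g = best_c / sum(p * p for p in probe)
            taps[best_d] = taps.get(best_d, 0.0) + g
            for i in range(len(probe)):          # subtract the matched component
                residual[best_d + i] -= g * probe[i]
        return taps

    # Two-path toy channel: delays 0 and 5 samples with gains 1.0 and 0.5.
    probe = [1.0, -1.0, 1.0, 1.0, -1.0]          # short deterministic probe
    received = [0.0] * 16
    for d, g in [(0, 1.0), (5, 0.5)]:
        for i, p in enumerate(probe):
            received[d + i] += g * p
    taps = matching_pursuit_taps(probe, received, max_taps=2)
    # taps recovers {0: 1.0, 5: 0.5}
    ```

    Real receivers refine this with basis/orthogonal matching pursuit over complex baseband data and iterate with the decoder, but the greedy correlate-subtract loop is the core of the sparse estimate.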

  15. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model

    PubMed Central

    Bindu, G.; Semenov, S.

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion, and successful image reconstruction is shown, implying its robustness. PMID:24058889

  16. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and iterated closest point (ICP) was presented. It consisted of an Octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it is a 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.

  17. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compress sensing methods

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th 2013 a Mw 8.3 normal faulting earthquake occurred at a depth of approximately 600 km beneath the sea of Okhotsk, Russia. It is a rare mega earthquake that ever occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and also the frequency-domain compressive sensing (CS) technique[2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use the teleseismic P-wave data from about 350 stations of USArray. IBP is an improved method of the traditional backprojection method, which more accurately locates subevents (energy burst) during earthquake rupture and determines the rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the beginning 15 seconds with slow rupture speeds: about 2.5km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140km, in a nearly N-S direction, with a southward rupture length about 100 km and a northward rupture length about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and rupture process. The results from both methods are generally similar. In the next step, we'll use data from dense arrays in southwest China and also global stations for further analysis in order to more comprehensively study the rupture process of this deep mega earthquake. Reference [1] Yao H, Shearer P M, Gerstoft P. 
Subevent location and rupture imaging using iterative backprojection for the 2011 Tohoku Mw 9.0 earthquake. Geophysical Journal International, 2012, 190(2): 1152-1168. [2]Yao H, Gerstoft P, Shearer P M, et al. Compressive sensing of the Tohoku-Oki Mw 9.0 earthquake: Frequency-dependent rupture modes. Geophysical Research Letters, 2011, 38(20).
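    The shift-and-stack principle behind backprojection can be sketched in a few lines (a one-dimensional toy with an invented station geometry, not the IBP implementation of Yao et al.):

    ```python
    # Toy shift-and-stack backprojection: for each candidate source position,
    # align the station traces on the predicted arrival times and sum; the
    # true source location maximizes the coherent stack.

    def backproject(traces, delays_for, candidates, dt):
        """Return the candidate position with the largest stacked amplitude."""
        best, best_power = None, -1.0
        for c in candidates:
            power = 0.0
            for tr, d in zip(traces, delays_for(c)):
                k = int(round(d / dt))       # sample index of predicted arrival
                if 0 <= k < len(tr):
                    power += tr[k]
            if power > best_power:
                best, best_power = c, power
        return best

    # Synthetic test: 4 stations on a line (km), pulse radiated from x = 3.0 km,
    # propagation speed 5 km/s, sample interval 0.01 s.
    speed, dt, n = 5.0, 0.01, 800
    stations = [0.0, 10.0, 20.0, 30.0]
    true_x = 3.0
    traces = []
    for s in stations:
        tr = [0.0] * n
        tr[int(round(abs(s - true_x) / speed / dt))] = 1.0   # unit arrival pulse
        traces.append(tr)

    delays_for = lambda c: [abs(s - c) / speed for s in stations]
    found = backproject(traces, delays_for, [i * 0.5 for i in range(61)], dt)
    # found == 3.0
    ```

    The iterative variant refines this by locating the strongest subevent, removing its predicted contribution from the traces, and repeating, which sharpens the recovered rupture propagation.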

  18. Reference-free Shack-Hartmann wavefront sensor.

    PubMed

    Zhao, Liping; Guo, Wenjiang; Li, Xiang; Chen, I-Ming

    2011-08-01

    The traditional Shack-Hartmann wavefront sensing (SHWS) system measures the wavefront slope by calculating the centroid shift between the sample and a reference piece, and then the wavefront is reconstructed by a suitable iterative reconstruction method. Because of the necessity of a reference, many issues arise that limit the system in most applications. This Letter proposes a reference-free wavefront sensing (RFWS) methodology, and an RFWS system is built in which wavefront slope changes are measured by introducing a lateral disturbance to the sampling aperture. By applying Southwell reconstruction twice to the measured data, the form of the wavefront at the sampling plane can be well reconstructed. A theoretical simulation platform for RFWS is established, and various surface forms are investigated. Practical measurements with two measurement systems, SHWS and our RFWS, are conducted, analyzed, and compared. All the simulation and measurement results prove and demonstrate the correctness and effectiveness of the method. © 2011 Optical Society of America

  19. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model-reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q-iteration algorithm, uses two neural networks: one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.

  20. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by 2 orders of magnitude.

  1. Radiant Temperature Nulling Radiometer

    NASA Technical Reports Server (NTRS)

    Ryan, Robert (Inventor)

    2003-01-01

    A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring the radiance of both the test and the reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to that of the test source; and measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal that is fed with a chopper reference signal to an error signal generator, such as a synchronous detector, that creates a precision rectified signal that is approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more Hertz, allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
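    The nulling feedback described above reduces to a simple integral control loop, sketched here (illustrative only; the gain, temperatures and unit-free T^4 radiance model are invented for the example, not taken from the patent):

    ```python
    # Toy nulling loop: the synchronous detector outputs an error signal
    # proportional to the radiance difference between the test source and the
    # reference black body; an integrator drives the reference temperature
    # until the error is nulled, at which point T_ref equals the radiative
    # temperature of the test source.

    def radiance(T):
        """Stefan-Boltzmann-like radiance in arbitrary units (constant folded into gain)."""
        return T ** 4

    def null_reference(T_test, T_ref=280.0, gain=1e-9, steps=5000):
        for _ in range(steps):
            error = radiance(T_test) - radiance(T_ref)  # synchronous-detector output
            T_ref += gain * error                       # integrator update
        return T_ref

    T = null_reference(T_test=293.15)
    # T converges to 293.15 (the test-source temperature)
    ```

    The loop gain must be small enough that the linearized update (proportional to 4T^3) remains a contraction; the chopper merely time-multiplexes the two radiance measurements that form the error signal.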

  2. Simulation-based optimization of lattice support structures for offshore wind energy converters with the simultaneous perturbation algorithm

    NASA Astrophysics Data System (ADS)

    Molde, H.; Zwick, D.; Muskulus, M.

    2014-12-01

    Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g. spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs, and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
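    The two-evaluation pseudo-gradient at the heart of Spall's method can be sketched as follows (a generic SPSA loop on a toy quadratic objective, not the wind-turbine model; gain constants are illustrative):

    ```python
    import random

    # Minimal SPSA sketch: each iteration perturbs all parameters at once with
    # a random +/-1 (Rademacher) vector and forms a pseudo-gradient from only
    # two objective evaluations, regardless of the problem dimension.

    def spsa(f, theta, a=0.1, c=0.1, iters=2000):
        theta = list(theta)
        n = len(theta)
        for k in range(1, iters + 1):
            ak = a / k ** 0.602                 # standard decaying gain sequences
            ck = c / k ** 0.101
            delta = [random.choice([-1.0, 1.0]) for _ in range(n)]
            plus = f([t + ck * d for t, d in zip(theta, delta)])
            minus = f([t - ck * d for t, d in zip(theta, delta)])
            ghat = [(plus - minus) / (2.0 * ck * d) for d in delta]  # pseudo-gradient
            theta = [t - ak * g for t, g in zip(theta, ghat)]        # descent step
        return theta

    random.seed(1)
    # Toy stand-in for the design objective, minimum at (2, -1):
    cost = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
    best = spsa(cost, [0.0, 0.0])
    ```

    In the structural application each call to f would be a full time-domain analysis, which is why needing only two evaluations per iteration matters.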

  3. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with missing values under different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete data sets and data sets with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization algorithm (EM), the regression method, mean imputation, the delete method, and Markov chain Monte Carlo (MCMC) were each used to fill in the missing data. The results of the different methods were compared according to distribution characteristics, accuracy and precision. Results: HIV VL data could not be transformed into a normal distribution. All the methods performed well for data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods best preserved the main characteristics of the original data. The means of the imputed data sets produced by the different methods were all close to the original one. EM, the regression method, mean imputation, and the delete method under-estimated VL while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for mean HIV VL estimation in the investigated population.
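    The qualitative point about MCAR data can be reproduced with a toy simulation (a pure-Python stand-in for the SPSS workflow; the log-normal parameters and missingness rate are invented for illustration):

    ```python
    import random
    import statistics

    # Under MCAR, even simple strategies such as complete-case deletion or mean
    # imputation leave the mean of a skewed, VL-like variable nearly unbiased,
    # because the observed cases are a random subsample of the full data.

    random.seed(7)
    full = [random.lognormvariate(8.0, 1.5) for _ in range(20000)]  # skewed "VL" values
    observed = [v for v in full if random.random() > 0.3]           # ~30% MCAR missing

    true_mean = statistics.fmean(full)
    deleted_mean = statistics.fmean(observed)                 # complete-case deletion
    n_missing = len(full) - len(observed)
    imputed = observed + [deleted_mean] * n_missing           # mean imputation
    imputed_mean = statistics.fmean(imputed)
    ```

    Mean imputation preserves the observed mean exactly while shrinking the variance, which is one reason the paper compares methods on distribution characteristics and precision, not just on the mean.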

  4. Agile Task Tracking Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Roger T.; Crump, Thomas Vu

    The work was created to provide a tool for improving the management of tasks associated with Agile projects. Agile projects are typically completed in an iterative manner, with many short-duration tasks performed as part of iterations; these iterations are generally referred to as sprints. The objective of this work is to create a single tool that enables sprint teams to manage all of their tasks in multiple sprints and automatically produce all standard sprint performance charts with minimum effort. The format of the printed work is designed to mimic a standard Kanban board. The work is developed as a single Excel file with worksheets capable of managing up to five concurrent sprints and up to one hundred tasks. It also includes a summary worksheet providing performance information from all active sprints. There are many commercial project management systems, typically designed with features desired by larger organizations with many resources managing multiple programs and projects. The audience for this work is the small organizations and Agile project teams desiring an inexpensive, simple, user-friendly task management tool. This work uses standard, readily available software (Excel), requiring minimum data entry and automatically creating summary charts and performance data. It is formatted to print out and resemble standard flip charts and provide the visuals associated with this type of work.

  5. First tritium operation of ITER-prototype VUV spectroscopy on JET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffey, I.H.; Barnsley, R.

    Results from tritium operation of the VUV survey spectrometer on the JET tokamak are presented. The instrument, located outside the biological shield and offset from a direct plasma line-of-sight for maximum radiation protection, was operational during the trace tritium campaign (TTE) at JET. No discernible increase in detector background noise levels was detected for total neutron rates of up to 1×10^17/s, demonstrating the shielding effectiveness of the configuration. Some tritium retention in the detector microchannel plate was measurable, but has not hampered subsequent operations. As a reference, the unshielded detector of a close-coupled XUV instrument was operated during TTE (the spectrometer itself was valved off from the JET vessel). This was exposed to neutron fluxes of ~10^9/cm^2 s, in excess of those predicted for the corresponding instrument on ITER (10^7-10^8/cm^2 s). A corresponding increase in the background level, equivalent to ~5% of the detector dynamic range, was measured. This demonstration of the shielding effectiveness of the SPRED configuration during DT operations, coupled with the tolerable noise levels measured in the SOXMOS detector, gives confidence in the planned implementation of such instruments in ITER.

  6. Biosimilars: Key regulatory considerations and similarity assessment tools

    PubMed Central

    Wang, Xiao‐Zhuo Michelle; Conlon, Hugh D.; Anderson, Scott; Ryan, Anne M.; Bose, Arindam

    2017-01-01

    Abstract A biosimilar drug is defined in the US Food and Drug Administration (FDA) guidance document as a biopharmaceutical that is highly similar to an already licensed biologic product (referred to as the reference product) notwithstanding minor differences in clinically inactive components and for which there are no clinically meaningful differences in purity, potency, and safety between the two products. The development of biosimilars is a challenging, multistep process. Typically, the assessment of similarity involves comprehensive structural and functional characterization throughout the development of the biosimilar in an iterative manner and, if required by the local regulatory authority, an in vivo nonclinical evaluation, all conducted with direct comparison to the reference product. In addition, comparative clinical pharmacology studies are conducted with the reference product. The approval of biosimilars is highly regulated although varied across the globe in terms of nomenclature and the precise criteria for demonstrating similarity. Despite varied regulatory requirements, differences between the proposed biosimilar and the reference product must be supported by strong scientific evidence that these differences are not clinically meaningful. This review discusses the challenges faced by pharmaceutical companies in the development of biosimilars. PMID:28842986

  7. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial gauss of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters are also assigned according to in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to conduct the numerical model (i.e. MODFLOW) by the initial guess / adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrograph are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, adjust the values of recharges and parameters and repeat the iterative procedures until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2ed in 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. 
This demonstrates that the iterative EOF-based approach can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
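
    The EOF step at the heart of this scheme is compact enough to sketch. The following minimal example (not the authors' code; the synthetic hydrograph matrix and mode count are invented for illustration) extracts spatial EOFs and temporal expansion coefficients from a (time x wells) storage matrix with an SVD and reconstructs the field from the leading mode:

```python
import numpy as np

def eof_decompose(storage, n_modes):
    """Split a (time x wells) storage-hydrograph matrix into spatial EOFs
    and temporal expansion coefficients via SVD (a standard EOF recipe)."""
    mean = storage.mean(axis=0)
    u, s, vt = np.linalg.svd(storage - mean, full_matrices=False)
    coeffs = u[:, :n_modes] * s[:n_modes]   # temporal expansion coefficients
    eofs = vt[:n_modes]                     # spatial patterns
    return eofs, coeffs, mean

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# A rank-1 synthetic "storage hydrograph" plus noise: one EOF mode
# should capture nearly all of it
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)
signal = np.outer(np.sin(2 * np.pi * t), rng.normal(size=8))
storage = signal + 0.01 * rng.normal(size=signal.shape)
eofs, coeffs, mean = eof_decompose(storage, n_modes=1)
reconstruction = coeffs @ eofs + mean
```

    In the paper's loop, the same decomposition applied to the error storage hydrographs (observed minus simulated) would supply the correction vectors at each calibration iteration.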

  8. A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game

    ERIC Educational Resources Information Center

    Leeder, Chris; Markey, Karen; Yakel, Elizabeth

    2012-01-01

    This study measured the quality of student bibliographies through creation of a faceted taxonomy flexible and fine-grained enough to encompass the variety of online sources cited by today's students. The taxonomy was developed via interviews with faculty, iterative refinement of categories and scoring, and testing on example student…

  9. The Pursuit of Equality: Retaining Women in Information Technology

    ERIC Educational Resources Information Center

    Ehlert, Teresa

    2017-01-01

    This qualitative study employed a three-iteration classical Delphi design to determine consensus regarding retention strategies of women in the IT industry. There is a call for the information technology (IT) industry to hire and retain more women. Retaining such a valuable educated source would help fill the ever-rising need for skilled workers…

  10. Exploring Teacher Leadership in a Rural, Secondary School: Reciprocal Learning Teams as a Catalyst for Emergent Leadership

    ERIC Educational Resources Information Center

    Cherkowski, Sabre; Schnellert, Leyton

    2017-01-01

    The purpose of this case study was to examine how teachers experienced professional development as collaborative inquiry, and how their experiences contributed to their development as teacher leaders. Three overarching themes were identified through iterative qualitative analysis of multiple data sources including interviews, observations,…

  11. Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint.

    PubMed

    Thilak Krishna, Thilakam Vimal; Creusere, Charles D; Voelz, David G

    2011-01-01

    Polarization, a property of light that conveys information about the transverse electric field orientation, complements other attributes of electromagnetic radiation such as intensity and frequency. Using multiple passive polarimetric images, we develop an iterative, model-based approach to estimate the complex index of refraction and apply it to target classification.

  12. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  13. Parameter selection with the Hotelling observer in linear iterative image reconstruction for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan

    2018-03-01

    In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.

  14. Optimal wavefront estimation of incoherent sources

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler

    2014-08-01

    Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.

  15. MOC Efficiency Improvements Using a Jacobi Inscatter Approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan

    2016-08-31

    In recent weeks, attention has been given to resolving the convergence issues encountered with TCP0 by trying a Jacobi (J) inscatter approach when group sweeping, where the inscatter source is constructed using the previous iteration flux. This is in contrast to a Gauss-Seidel (GS) approach, which has been the default to date, where the scattering source uses the most up-to-date flux values. The former is consistent with CASMO, which has no issues with TCP0 convergence. Testing this out on a variety of problems has demonstrated that the Jacobi approach does indeed provide substantially more stability, though it can take more outer iterations to converge. While this is not surprising, there are improvements that can be made to the MOC sweeper to capitalize on the Jacobi approximation and provide substantial speedup. For example, the loop over groups, which has traditionally been the outermost loop in MPACT, can be moved to the interior, avoiding duplicate modular ray trace and coarse ray trace setup (mapping coarse mesh surface indexes), which needs to be performed repeatedly when group is outermost.
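
    The Jacobi-versus-Gauss-Seidel distinction for the in-scatter source can be illustrated on a toy two-group fixed-source problem; this is a sketch only (the cross sections are invented, and nothing here is MPACT code):

```python
import numpy as np

# Toy two-group fixed-source balance: R_g * phi_g = q_g + sum_g' S[g,g'] phi_g'
S = np.array([[0.0, 0.2],
              [0.3, 0.0]])   # group-to-group scattering (invented values)
R = np.array([1.0, 1.0])     # removal
q = np.array([1.0, 0.5])     # fixed source

def sweep(phi, jacobi=True, iters=50):
    phi = phi.copy()
    for _ in range(iters):
        if jacobi:
            # Jacobi: in-scatter built entirely from the previous-iteration flux
            phi = (q + S @ phi) / R
        else:
            # Gauss-Seidel: use the most up-to-date flux within the sweep
            for g in range(len(phi)):
                phi[g] = (q[g] + S[g] @ phi) / R[g]
    return phi

exact = np.linalg.solve(np.diag(R) - S, q)
phi_j = sweep(np.zeros(2), jacobi=True)
phi_gs = sweep(np.zeros(2), jacobi=False)
```

    Because the Jacobi update builds the in-scatter source only from the previous iteration's flux, the groups decouple within a sweep, which is what allows the group loop to be moved innermost at the cost of some extra outer iterations.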

  16. Application Research of Horn Array Multi-Beam Antenna in Reference Source System for Satellite Interference Location

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Lin, Hui; Zhang, Qi

    2018-01-01

    The reference source system is a key factor in ensuring the successful location of a satellite interference source. The traditional system used a mechanically rotated antenna, which led to the disadvantages of slow rotation and a high failure rate, seriously restricting the system’s positioning timeliness and becoming its obvious weakness. In this paper, a multi-beam antenna scheme based on a horn array was proposed as a reference source for satellite interference location, to be used as an alternative to the traditional reference source antenna. The new scheme designed a small circularly polarized horn antenna as an element and proposed a multi-beamforming algorithm based on a planar array. Moreover, simulation analyses of the horn antenna pattern, the multi-beamforming algorithm, and the simulated satellite-link cross-ambiguity calculation were carried out. Finally, the cross-ambiguity calculation of the traditional reference source system was also tested. The comparison between the computer simulation results and the actual test results shows that the scheme is scientific and feasible, and obviously superior to the traditional reference source system.
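
    As a rough illustration of the planar-array multi-beamforming idea (not the paper's algorithm), here is a delay-and-sum sketch; the 4x4 half-wavelength array and the 12 GHz frequency are assumptions made for the example:

```python
import numpy as np

c = 3.0e8                     # speed of light, m/s
f = 12.0e9                    # hypothetical Ku-band frequency (assumption)
lam = c / f
pos = np.arange(4) * lam / 2  # half-wavelength spacing, 4 x 4 planar array
X, Y = np.meshgrid(pos, pos)

def steering_vector(theta, phi):
    """Unit-power phase weights pointing a beam at elevation theta, azimuth phi."""
    k = 2.0 * np.pi / lam
    phase = k * (X * np.sin(theta) * np.cos(phi)
                 + Y * np.sin(theta) * np.sin(phi))
    return np.exp(1j * phase).ravel() / X.size

def array_gain(weights, theta, phi):
    """Normalized response of the weighted array to a plane wave from (theta, phi)."""
    return abs(np.vdot(weights, steering_vector(theta, phi))) * X.size

# One beam of the multi-beam set, steered to 20 degrees elevation
w = steering_vector(np.radians(20.0), 0.0)
on_beam = array_gain(w, np.radians(20.0), 0.0)
off_beam = array_gain(w, np.radians(40.0), 0.0)
```

    Forming several such weight sets simultaneously yields multiple fixed beams, which is what lets an electronically steered array replace the slow mechanically rotated reference antenna.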

  17. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user then estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response.
Last, PEST runs gprMax with the initial model and calculates the misfit between the synthetic and observed data; using an iterative algorithm that calls gprMax several times in each iteration, it finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
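
    The overall inversion loop can be caricatured with a toy forward model standing in for gprMax and plain gradient descent standing in for PEST's parameter estimation; the pulse shape, permittivity values, and step size below are all invented for this sketch:

```python
import numpy as np

def forward_model(permittivity, times):
    """Stand-in for a gprMax run: a toy reflection whose two-way travel
    time scales with the square root of the relative permittivity."""
    arrival = 2.0 * np.sqrt(permittivity)
    return np.exp(-((times - arrival) ** 2))

def misfit(model, observed, times):
    """Sum-of-squares mismatch between synthetic and observed traces."""
    return float(np.sum((forward_model(model, times) - observed) ** 2))

# "Observed" trace generated with a true permittivity of 9
times = np.linspace(0.0, 10.0, 500)
observed = forward_model(9.0, times)

# Iterate from a ray-based starting model (here: 8), updating the model
# to reduce the misfit, as PEST does with repeated gprMax calls
model, step = 8.0, 0.01
for _ in range(200):
    grad = (misfit(model + 1e-4, observed, times)
            - misfit(model - 1e-4, observed, times)) / 2e-4
    model -= step * grad
```

    Restarting this loop from several starting models, as the abstract suggests, is the practical check that the minimum found is not merely local.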

  18. Archaeology: A Student's Guide to Reference Sources.

    ERIC Educational Resources Information Center

    Desautels, Almuth, Comp.

    This bibliography lists reference sources for research in archaeology. It is arranged in sections by type of reference source with subsections for general works and works covering specific areas. Categorized are handbooks; directories, biographies, and museums; encyclopedias; dictionaries; atlases; guides, manuals, and surveys; bibliographies; and…

  19. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to its potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach considers the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike the conventional iterative algorithms for absorption-based CT, it involves a derivative operation on the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. This method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
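
    A minimal sketch of the idea, with an invented five-ray, four-pixel system (an illustration, not the authors' algorithm): the forward projections are differenced along the detector to mimic the differential nature of DPC data, and Kaczmarz-style ART sweeps are applied ray by ray:

```python
import numpy as np

# Toy system: rays through a 4-pixel object, with the *derivative* of the
# projections along the detector as the measured data
A = np.array([[1.0, 1, 0, 0],   # projection matrix (rays x pixels), invented
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])
D = np.diff(np.eye(5), axis=0)  # finite difference along the detector
M = D @ A                        # differential forward projector
x_true = np.array([3.0, 1.0, 4.0, 1.0])
d = M @ x_true                   # measured differential projections

# ART (Kaczmarz) sweeps, ray by ray, on the differential system M x = d
x = np.zeros(4)
for _ in range(500):
    for i in range(M.shape[0]):
        row = M[i]
        x += (d[i] - row @ x) / (row @ row) * row
```

    In this toy system a constant image shifts all projections equally, so the differenced data cannot see it and the object is recovered only up to its mean, which is why any check of the result should compare mean-free images.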

  20. Establishing a celestial VLBI reference frame. 1: Searching for VLBI sources

    NASA Technical Reports Server (NTRS)

    Preston, R. A.; Morabito, D. D.; Williams, J. G.; Slade, M. A.; Harris, A. W.; Finley, S. G.; Skjerve, L. J.; Tanida, L.; Spitzmesser, D. J.; Johnson, B.

    1978-01-01

    The Deep Space Network is currently engaged in establishing a new high-accuracy VLBI celestial reference frame. The present status of the task of finding suitable celestial radio sources for constructing this reference frame is discussed. To date, 564 VLBI sources have been detected, with 166 of these lying within 10 deg of the ecliptic plane. The variation of the sky distribution of these sources with source strength is examined.

  1. A first characterization of the NIO1 particle beam by means of a diagnostic calorimeter

    NASA Astrophysics Data System (ADS)

    Pimazzoni, A.; Cavenago, M.; Cervaro, V.; Fasolo, D.; Serianni, G.; Tollin, M.; Veltri, P.

    2017-08-01

    Powerful neutral beam injectors (NBI) are required as heating and current drive systems for tokamaks like ITER. The development of negative ion sources and accelerators (40 A; 1 MeV D- beam) in particular is a crucial point, and many issues still require a better understanding. In this framework, the experiment NIO1 (9 beamlets of 15 mA H- each, 60 kV), operated at Consorzio RFX, started operation in 2014 [1]. Both its RF negative ion source (up to 2.5 kW) and its beamline are equipped with many diagnostics [2]. For the early tests on the extraction system, oxygen has been used in addition to hydrogen because of its higher electronegativity, which allows reaching currents large enough to test the beam diagnostics even without caesium injection. In particular, a 1D-CFC (carbon-fibre-carbon composite) tile is used as a calorimeter to determine the beam power deposition by observing the rear surface of the tile with an infra-red camera; the same design is applied as for STRIKE [3], one of the diagnostics of SPIDER (the ITER-like ion source prototype [4]), whose facility is currently under construction at Consorzio RFX. From this diagnostic it is also possible to assess the beam divergence and thus the beam optics. The present contribution describes the characterization of the NIO1 particle beam by means of temperature and current measurements with different source and accelerator parameters.

  2. Preliminary results concerning the simulation of beam profiles from extracted ion current distributions for mini-STRIKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agostinetti, P., E-mail: piero.agostinetti@igi.cnr.it; Serianni, G.; Veltri, P.

    The Radio Frequency (RF) negative hydrogen ion source prototype has been chosen for the ITER neutral beam injectors due to its optimal performance and easier maintenance, demonstrated at Max-Planck-Institut für Plasmaphysik, Garching, in hydrogen and deuterium. One of the key pieces of information needed to better understand the operating behavior of RF ion sources is the extracted negative ion current density distribution. This distribution, influenced by several factors like source geometry, particle drifts inside the source, cesium distribution, and layout of cesium ovens, is not straightforward to evaluate. The main outcome of the present contribution is the development of a minimization method to estimate the extracted current distribution using the footprint of the beam recorded with mini-STRIKE (Short-Time Retractable Instrumented Kalorimeter). To accomplish this, a series of four computational models has been set up, where the output of one model is the input of the following one. These models compute the optics of the ion beam, evaluate the distribution of the heat deposited on the mini-STRIKE diagnostic calorimeter, and finally give an estimate of the temperature distribution on the back of mini-STRIKE. Several iterations with different extracted current profiles are necessary to give an estimate of the profile most compatible with the experimental data. A first test of the application of the method to the BAvarian Test Machine for Negative ions beam is given.

  3. Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.

    Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. In conclusion, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.

  5. Extending the applicability of the Tkatchenko-Scheffler dispersion correction via iterative Hirshfeld partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bučko, Tomáš, E-mail: bucko@fns.uniba.sk; Department of Computational Materials Physics, Fakultät für Physik and Center for Computational Materials Science, Universität Wien, Sensengasse, Wien 1090; Lebègue, Sébastien, E-mail: sebastien.lebegue@univ-lorraine.fr

    2014-07-21

    Recently we have demonstrated that the applicability of the Tkatchenko-Scheffler (TS) method for calculating dispersion corrections to density-functional theory can be extended to ionic systems if the Hirshfeld method for estimating effective volumes and charges of atoms in molecules or solids (AIM's) is replaced by its iterative variant [T. Bučko, S. Lebègue, J. Hafner, and J. Ángyán, J. Chem. Theory Comput. 9, 4293 (2013)]. The standard Hirshfeld method uses neutral atoms as a reference, whereas in the iterative Hirshfeld (HI) scheme the fractionally charged atomic reference states are determined self-consistently. We show that the HI method predicts more realistic AIM charges and that the TS/HI approach leads to polarizabilities and C6 dispersion coefficients in ionic or partially ionic systems which are, as expected, larger for anions than for cations (in contrast to the conventional TS method). For crystalline materials, the new algorithm predicts polarizabilities per unit cell in better agreement with the values derived from the Clausius-Mossotti equation. The applicability of the TS/HI method has been tested for a wide variety of molecular and solid-state systems. It is demonstrated that for systems dominated by covalent interactions and/or dispersion forces the TS/HI method leads to the same results as the conventional TS approach. The difference between the TS/HI and TS approaches increases with increasing ionicity. A detailed comparison is presented for isoelectronic series of octet compounds, layered crystals, complex intermetallic compounds, and hydrides, and for crystals built of molecules or containing molecular anions. It is demonstrated that only the TS/HI method leads to accurate results for systems where both electrostatic and dispersion interactions are important, as illustrated for Li-intercalated graphite and for molecular adsorption on the surfaces in ionic solids and in the cavities of zeolites.

  6. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise material characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods needed to draw inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach. Performance of the iterative PLS-DA method compared favorably to that of classification and regression tree (CART) and k-nearest-neighbor (KNN) algorithms, with the best combination of accuracy and robustness, as tested by classifying samples measured independently in our laboratories against the vendor-QC-based reference set.
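
    The iterative class-pruning idea can be sketched with a much simpler classifier standing in for PLS-DA (nearest scaled centroid); the site signatures, spreads, and cutoff below are all invented for illustration:

```python
import numpy as np

def iterative_attribution(sample, class_means, class_spreads, cutoff=3.0):
    """Attribute a sample by nearest scaled centroid, iteratively dropping
    classes that lie far outside the decision boundary and re-attributing.
    Nearest-centroid stands in here for the paper's PLS-DA model."""
    labels = list(class_means)
    while True:
        dists = {c: np.linalg.norm(sample - class_means[c]) / class_spreads[c]
                 for c in labels}
        best = min(dists, key=dists.get)
        far = [c for c in labels if dists[c] > cutoff and c != best]
        if not far or len(labels) <= 2:
            return best
        labels = [c for c in labels if c not in far]  # rebuild with fewer classes

# Hypothetical UOC trace-element signatures (two features per production site)
means = {"site_A": np.array([1.0, 0.0]),
         "site_B": np.array([0.0, 1.0]),
         "site_C": np.array([5.0, 5.0])}
spreads = {"site_A": 0.5, "site_B": 0.5, "site_C": 0.5}
```

    Pruning the implausible classes and refitting mimics the paper's rebuild step: the final attribution is made against only the classes that remain plausible for the questioned sample.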

  7. Negative hydrogen ion production in a helicon plasma source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoso, J., E-mail: Jesse.Santoso@anu.edu.au; Corr, C. S.; Manoharan, R.

    2015-09-15

    In order to develop very high energy (>1 MeV) neutral beam injection systems for applications such as plasma heating in fusion devices, it is necessary first to develop high-throughput negative ion sources. For the ITER reference source, this will be realised using caesiated inductively coupled plasma devices, containing either hydrogen or deuterium discharges, operated with high rf input powers (up to 90 kW per driver). It has been suggested that, due to their high power coupling efficiency, helicon devices may be able to reduce power requirements and potentially obviate the need for caesiation due to the high plasma densities achievable. Here, we present measurements of negative ion densities in a hydrogen discharge produced by a helicon device, with externally applied DC magnetic fields ranging from 0 to 8.5 mT at 5 and 10 mTorr fill pressures. These measurements were taken in the magnetised plasma interaction experiment at the Australian National University and were performed using the probe-based laser photodetachment technique, modified for use in the afterglow of the plasma discharge. A peak in the electron density is observed at ∼3 mT and is correlated with changes in the rf power transfer efficiency. With increasing magnetic field, an increase in the negative ion fraction from 0.04 to 0.10 and in the negative ion densities from 8 × 10^14 m^-3 to 7 × 10^15 m^-3 is observed. It is also shown that the negative ion densities can be increased by a factor of 8 with the application of an external DC magnetic field.

  8. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, improving in vivo imaging protocols.
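
    As a simplified stand-in for the gradient-plus-shrinkage structure of such reconstructions, the following sketch runs ISTA (a gradient step on the data term followed by a soft-threshold shrinkage step) on an invented undersampled system; it is not the authors' Douglas-Rachford/wavelet-packet method, and every dimension and weight here is made up:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal (shrinkage) step enforcing the sparsity prior."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Invented reduced-view problem: 30 measurements of a 50-pixel sparse object
rng = np.random.default_rng(0)
m, n = 30, 50
A = rng.normal(size=(m, n)) / np.sqrt(m)   # undersampled projector (toy)
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]     # sparse object
y = A @ x_true

lam = 0.01                                  # sparsity weight
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the data term

def objective(z):
    return 0.5 * np.sum((A @ z - y) ** 2) + lam * np.sum(np.abs(z))

# ISTA: gradient step on the data-fidelity term, then shrinkage
x = np.zeros(n)
obj0 = objective(x)
for _ in range(500):
    x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
```

    The shrinkage step is where a denoising transform (wavelet packets in the paper) would act; here it is applied directly in the pixel basis for brevity.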

  9. Collimator-free photon tomography

    DOEpatents

    Dilmanian, F. Avraham; Barbour, Randall L.

    1998-10-06

    A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally to the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison are iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image.
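
    The predict-compare-update loop described in the claim can be sketched with a tiny invented response matrix (a linear stand-in for the energy-spectrum model; the step size is chosen for stability of this toy system):

```python
import numpy as np

# Toy uncollimated-detector model: every detector position sees every voxel,
# with weights standing in for the energy-dependent response (invented)
response = np.array([[4.0, 1.0, 0.5],
                     [1.0, 4.0, 1.0],
                     [0.5, 1.0, 4.0]])
source_true = np.array([2.0, 0.5, 1.0])
measured = response @ source_true        # measured emissivities (toy)

# Iterate: predict from the current image, compare, update until the
# differences fall below a threshold
image = np.zeros(3)
threshold = 1e-10
for _ in range(10_000):
    predicted = response @ image
    diff = measured - predicted
    if np.max(np.abs(diff)) < threshold:
        break
    image += 0.05 * response.T @ diff    # update the image prediction
```

    This is a plain Landweber-style update; the patented method's actual update rule is not specified in the abstract, so the step is only meant to show the iterate-until-threshold structure.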

  10. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.

  11. Jewish Studies: A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    McGill Univ., Montreal (Quebec). McLennan Library.

    An annotated bibliography to the reference sources for Jewish Studies in the McLennan Library of McGill University (Canada) is presented. Any titles in Hebrew characters are listed by their transliterated equivalents. There is also a list of relevant Library of Congress Subject Headings. General reference sources listed are: encyclopedias,…

  12. Economics: A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    Mason, Mary, Comp.

    Approximately 84 reference materials on economics located in the McLennan Library, McGill University (Montreal), are cited in this annotated bibliography. The bibliography serves to provide an overview of the printed bibliographic and reference sources useful for the study of economics. Financial and business sources and statistical compendia and…

  13. Advanced density profile reflectometry; the state-of-the-art and measurement prospects for ITER

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.

    2006-10-01

    Dramatic progress in millimeter-wave technology has allowed the realization of a key goal for ITER diagnostics, the routine measurement of the plasma density profile from millimeter-wave radar (reflectometry) measurements. In reflectometry, the measured round-trip group delay of a probe beam reflected from a plasma cutoff is used to infer the density distribution in the plasma. Reflectometer systems implemented by UCLA on a number of devices employ frequency-modulated continuous-wave (FM-CW), ultrawide-bandwidth, high-resolution radar systems. One such system on DIII-D has routinely demonstrated measurements of the density profile over a range of electron density of 0–6.4×10^19 m^-3, with ˜25 μs time and ˜4 mm radial resolution, meeting key ITER requirements. This progress in performance was made possible by multiple advances in the areas of millimeter-wave technology, novel measurement techniques, and improved understanding, including: (i) fast sweep, solid-state, wide bandwidth sources and power amplifiers, (ii) dual polarization measurements to expand the density range, (iii) adaptive radar-based data analysis with parallel processing on a Unix cluster, (iv) high memory depth data acquisition, and (v) advances in full wave code modeling. The benefits of advanced system performance will be illustrated using measurements from a wide range of phenomena, including ELM and fast-ion driven mode dynamics, L-H transition studies and plasma-wall interaction. The measurement capabilities demonstrated by these systems provide a design basis for the development of the main ITER profile reflectometer system. This talk will explore the extent to which these reflectometer system designs, results and experience can be translated to ITER, and will identify what new studies and experimental tests are essential.

  14. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    PubMed

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source (DSCT) computed tomography coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using the threshold for significant stenosis at ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
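The reported likelihood ratios follow arithmetically from the per-patient figures; a quick check (the 2×2 counts are reconstructed from the stated prevalence and percentages, so they are approximate):

```python
# Per-patient 2x2 table reconstructed (approximately) from the abstract:
# prevalence 48/160, sensitivity 100%, specificity 83%.
tp, fn = 48, 0            # sensitivity 100% -> no false negatives
tn = round(112 * 0.83)    # 93 true negatives among 112 disease-free patients
fp = 112 - tn             # 19 false positives

sensitivity = tp / (tp + fn)              # 1.00
specificity = tn / (tn + fp)              # ~0.83
ppv = tp / (tp + fp)                      # ~0.716, matching the reported 71.6%
npv = tn / (tn + fn)                      # 1.00
lr_pos = sensitivity / (1 - specificity)  # ~5.89, as reported
lr_neg = (1 - sensitivity) / specificity  # 0.0
```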

  15. The SWATH Concept: Designing Superior Operability into a Surface Displacement Ship

    DTIC Science & Technology

    1975-12-01

    [List of figures includes: "Artist's Concept of a 4000-Ton SWATH Combatant".] …the design process for SWATH combatants is iterative. At either the feasibility or conceptual stage, the designer starts with a "reasonable" hull… parameters, the multitude of design factors and innumerable combinations thereof constitute a difficult synthesis problem. Because they are…

  16. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.

  17. Thermal Structure and Dynamics of Saturn's Northern Springtime Disturbance

    NASA Technical Reports Server (NTRS)

    Fletcher, Leigh N.; Hesman, Brigette E.; Irwin, Patrick G.; Baines, Kevin H.; Momary, Thomas W.; Sanchez-Lavega, Agustin; Flasar, F. Michael; Read, Peter L.; Orton, Glenn S.; Simon-Miller, Amy; et al.

    2011-01-01

    This article combined several infrared datasets to study the vertical properties of Saturn's northern springtime storm. Spectroscopic observations of Saturn's northern hemisphere at 0.5 and 2.5 cm^-1 spectral resolution were provided by the Cassini Composite Infrared Spectrometer (CIRS, 17). These were supplemented with narrow-band filtered imaging from the ESO Very Large Telescope VISIR instrument (16) to provide a global spatial context for the Cassini spectroscopy. Finally, nightside imaging from the Cassini Visual and Infrared Mapping Spectrometer (VIMS, 22) provided a glimpse of the undulating cloud activity in the eastern branch of the disturbance. Each of these datasets, and the methods used to reduce and analyse them, will be described in detail below. Spatial maps of atmospheric temperatures, aerosol opacity and gaseous distributions are derived from infrared spectroscopy using a suite of radiative transfer and optimal estimation retrieval tools developed at the University of Oxford, known collectively as Nemesis (23). Synthetic spectra created from a reference atmospheric model for Saturn and appropriate sources of spectroscopic line data (6, 24) are convolved with the instrument function for each dataset. Atmospheric properties are then iteratively adjusted until the measurements are accurately reproduced with physically-realistic temperatures, compositions and cloud opacities.

  18. Image counter-forensics based on feature injection

    NASA Astrophysics Data System (ADS)

    Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.

    2014-02-01

    Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history of an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain producing an attacked image ~x, perceptually similar to x, whose feature f(~x) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = |f(z) − f(y)| through iterative methods based on gradient descent. To solve the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process, that allows the problem to be reduced into many subproblems on the blocks the image is partitioned into. The proposed strategy has been tested by attacking three different features and its performance has been compared to state-of-the-art counter-forensic methods.
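The attack loop can be sketched with a deliberately trivial feature f (mean and variance); a real forensic feature and real images would replace the toy objects here:

```python
import numpy as np

def f(img):
    """Toy feature vector; a real forensic feature would go here."""
    return np.array([img.mean(), img.var()])

def phi(z, fy):
    return np.linalg.norm(f(z) - fy)   # feature distance to the target

rng = np.random.default_rng(1)
x = rng.random((8, 8))                 # image to attack
y = 0.5 * rng.random((8, 8)) + 0.25    # authentic reference image
fy = f(y)

z, eps, step, h = x.copy(), 0.15, 0.5, 1e-5
for _ in range(200):
    grad = np.zeros_like(z)
    for idx in np.ndindex(z.shape):    # numerical gradient, pixel by pixel
        zp = z.copy()
        zp[idx] += h
        grad[idx] = (phi(zp, fy) - phi(z, fy)) / h
    z -= step * grad
    z = np.clip(z, x - eps, x + eps)   # perceptual-similarity constraint
```

The per-pixel finite differences are exactly the cost that grows prohibitive on large images, which is what the paper's block-wise feature decomposition is designed to avoid.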

  19. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). 
Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most numerical integration methods.
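For reference, the classical (unmodified) Laguerre iteration for a polynomial root is sketched below; the robustness modifications used in lambert2 are not described in the abstract, so this is only the textbook scheme:

```python
import numpy as np

def laguerre(coeffs, x0, tol=1e-12, max_iter=50):
    """Classical Laguerre iteration for one root of a polynomial.

    coeffs are highest-degree-first, as in numpy.polyval.
    """
    n = len(coeffs) - 1
    p = np.poly1d(coeffs)
    dp, d2p = p.deriv(), p.deriv(2)
    x = complex(x0)                      # complex arithmetic keeps sqrt defined
    for _ in range(max_iter):
        px = p(x)
        if abs(px) < tol:
            break
        G = dp(x) / px
        H = G * G - d2p(x) / px
        root = np.sqrt((n - 1) * (n * H - G * G))
        # choose the sign giving the larger denominator (standard safeguard)
        d = G + root if abs(G + root) >= abs(G - root) else G - root
        x -= n / d
    return x

# Cubic with roots 1, 2, 3, started from a rough guess:
r = laguerre([1, -6, 11, -6], x0=0.4)
```

Laguerre's method is attractive here for the same reason it is popular in polynomial solvers: for polynomials with all-real roots it converges from essentially any starting point, which matches the abstract's emphasis on robustness.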

  20. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor missions’ environments would be unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a novel low-cost method for 2D real-time scan matching based on a reference key frame (RKF), a hybrid scan matching technique combining feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, which indicates its potential for use in real-time systems. PMID:28481285

  1. Retention time alignment of LC/MS data by a divide-and-conquer algorithm.

    PubMed

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
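The divide-and-conquer recursion can be sketched as follows; the dot-product shift score, window sizes, and synthetic Gaussian chromatograms are illustrative stand-ins for the paper's feature-based matching:

```python
import numpy as np

def best_shift(sample, reference, max_shift):
    """Single constant shift (in scan indices) that best aligns sample to reference."""
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [np.dot(np.roll(sample, s), reference) for s in shifts]
    return shifts[int(np.argmax(scores))]

def align(sample, reference, lo=0, hi=None, min_len=32, shifts=None):
    """Divide-and-conquer alignment: shift the whole window, then recurse on halves."""
    if shifts is None:
        shifts = np.zeros(len(sample), dtype=int)
        sample = sample.copy()
    if hi is None:
        hi = len(sample)
    if hi - lo < min_len or sample[lo:hi].max() < 1e-6:
        return shifts                       # window too narrow or featureless
    s = best_shift(sample[lo:hi], reference[lo:hi], max_shift=(hi - lo) // 4)
    shifts[lo:hi] += s                      # accumulate this window's shift
    sample[lo:hi] = np.roll(sample[lo:hi], s)
    mid = (lo + hi) // 2
    align(sample, reference, lo, mid, min_len, shifts)
    align(sample, reference, mid, hi, min_len, shifts)
    return shifts

# Synthetic chromatograms: four Gaussian "ion features", sample shifted by +5 scans.
t = np.arange(256)
reference = sum(np.exp(-((t - c) / 6.0) ** 2) for c in (40, 90, 140, 200))
sample = np.roll(reference, 5)
shifts = align(sample, reference)
```

The recursion stops once a window is narrower than `min_len`, mirroring the paper's criterion that each final chromatogram segment is narrow enough for its features to share a similar retention time shift.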

  2. Direct design of aspherical lenses for extended non-Lambertian sources in two-dimensional geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.

    2016-01-01

    Illumination design for extended sources is very important for practical applications. The existing direct methods that are all developed for extended Lambertian sources are not applicable to extended non-Lambertian sources whose luminance is a function of position and direction. What we present in this Letter is to our knowledge the first direct method for extended non-Lambertian sources. In this method, the edge rays and the interior rays are both used, and the output intensity at a given direction is calculated to be the integral of the luminance function of all the outgoing rays at this direction. No cumbersome iterative illuminance compensation is needed. Two examples are presented to demonstrate the elegance of this method in prescribed intensity design for extended non-Lambertian sources in two-dimensional geometry. PMID:26125361

  3. Long pulse operation of the Kamaboko negative ion source on the MANTIS test bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tramham, R.; Jacquot, C.; Riz, D.

    1998-08-20

    Advanced Tokamak concepts and steady state plasma scenarios require external plasma heating and current drive for extended time periods. This poses several problems for the neutral beam injection systems that are currently in use. The power loading of the ion source and accelerator is especially problematic. The Kamaboko negative ion source, a small scale model of the ITER arc source, is being prepared for extended operation of deuterium beams for up to 1000 seconds. The operating conditions of the plasma grid prove to be important for reducing electron power loading of the accelerator. Operation of deuterium beams for extended periods also poses radiation safety risks which must be addressed.

  4. Russian History; A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    McGill Univ., Montreal (Quebec). McLennan Library.

    This guide identifies reference sources for the study of Russian and Soviet history available in the McGill University (Montreal) McLennan Library. Russian, English, French, and German language works covering Russian history from its origins to World War II are included. The guide is arranged in two parts: general reference sources and…

  5. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Specific licenses for the manufacture or initial transfer... manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium, for...

  6. Preliminary study of fusion reactor: Solution of Grad-Shafranov equation

    NASA Astrophysics Data System (ADS)

    Setiawan, Y.; Fermi, N.; Su'ud, Z.

    2012-06-01

    Nuclear fusion is a prospective energy source for the future due to the abundance of its fuel, and it can be categorized as a clean energy source. The problem is how to contain very hot plasma, at temperatures of a few hundred million degrees, safely and reliably. Tokamak-type fusion reactors are considered the most prospective concept. To analyze the plasma confinement process and its movement, the Grad-Shafranov equation must be solved. This paper discusses the solution of the Grad-Shafranov equation using Whittaker functions. The formulation is then applied to the ITER design as an example.
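For reference, the axisymmetric equilibrium equation referred to above is usually written in the standard form, with ψ the poloidal flux function, p(ψ) the plasma pressure, and F(ψ) = R·B_φ:

```latex
\Delta^{*}\psi \;\equiv\;
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial\psi}{\partial R}\right)
\;+\; \frac{\partial^{2}\psi}{\partial Z^{2}}
\;=\; -\,\mu_{0} R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi}
```

The right-hand side contains the two free flux functions p(ψ) and F(ψ); choosing simple (e.g. linear) profiles for them is what makes analytic solutions such as the Whittaker-function approach possible.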

  7. Numerical simulations of the charged-particle flow dynamics for sources with a curved emission surface

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.

    2016-12-01

    The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with a so-called gun iteration is used for simulations of the beam dynamics. For the space charge limited emission problem, we suggest a Gauss law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for a cylindrical bipolar diode and for a diode with an elliptical emitter are presented.

  8. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets (POCS) approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images acquired with our homemade THz-TDS system under the same experimental conditions with 1.0 mm pixels. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
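Of the four approaches listed, the Lucy-Richardson iteration is the most compact to sketch; this assumes a known Gaussian blur kernel, which stands in for whatever point-spread function the THz system actually has:

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(blurred, psf, n_iter=30):
    """Lucy-Richardson iteration: multiplicative update preserving non-negativity."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = blurred.copy()                 # start from the blurred image itself
    for _ in range(n_iter):
        conv = convolve2d(est, psf, mode="same", boundary="symm") + 1e-12
        ratio = blurred / conv           # data / re-blurred estimate
        est *= convolve2d(ratio, psf_mirror, mode="same", boundary="symm")
    return est

# Synthetic test: a point source blurred by a Gaussian PSF.
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
img = np.zeros((33, 33))
img[16, 16] = 1.0
blurred = convolve2d(img, psf / psf.sum(), mode="same")
sharpened = richardson_lucy(blurred, psf)
```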

  9. Apparatus and method for detecting gamma radiation

    DOEpatents

    Sigg, Raymond A.

    1994-01-01

    A high efficiency radiation detector for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data.

  10. Resolution, uncertainty and data predictability of tomographic Lg attenuation models—application to Southeastern China

    NASA Astrophysics Data System (ADS)

    Chen, Youlin; Xie, Jiakang

    2017-07-01

    We address two fundamental issues that pertain to Q tomography using high-frequency regional waves, particularly the Lg wave. The first issue is that Q tomography uses complex 'reduced amplitude data' as input. These data are generated by taking the logarithm of the product of (1) the observed amplitudes and (2) the simplified 1D geometrical spreading correction. They are thereby subject to 'modeling errors' that are dominated by uncompensated 3D structural effects; however, no knowledge of the statistical behaviour of these errors exists to justify the widely used least-squares methods for solving Q tomography. The second issue is that Q tomography has been solved using various iterative methods such as LSQR (Least-Squares QR, where QR refers to a QR factorization of a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R) and SIRT (Simultaneous Iterative Reconstruction Technique) that do not allow for the quantitative estimation of model resolution and error. In this study, we conduct the first rigorous analysis of the statistics of the reduced amplitude data and find that the data error distribution is predominantly normal, but with long-tailed outliers. This distribution is similar to that of teleseismic traveltime residuals. We develop a screening procedure to remove outliers so that data closely follow a normal distribution. Next, we develop an efficient tomographic method based on the PROPACK software package to perform singular value decomposition on a data kernel matrix, which enables us to solve for the inverse, model resolution and covariance matrices along with the optimal Q model. These matrices permit for various quantitative model appraisals, including the evaluation of the formal resolution and error. Further, they allow formal uncertainty estimates of predicted data (Q) along future paths to be made at any specified confidence level. 
This new capability significantly benefits the practical missions of source identification and source size estimation, for which reliable uncertainty estimates are especially important. We apply the new methodologies to data from southeastern China to obtain a 1 Hz Lg Q model, which exhibits patterns consistent with what is known about the geology and tectonics of the region. We also solve for the site response model.
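The appraisal matrices obtained from the truncated SVD can be written down compactly; a dense-NumPy stand-in for PROPACK on a toy kernel (dimensions, truncation level, and the data error level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.random((60, 40))          # toy data kernel: rows = paths, cols = Q cells

# Truncated SVD; PROPACK computes this iteratively for large sparse kernels.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = 20                            # retain the k largest singular values
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T

G_dagger = Vk @ np.diag(1.0 / sk) @ Uk.T             # generalized inverse
R = Vk @ Vk.T                                        # model resolution matrix
sigma_d = 0.1                                        # assumed 1-sigma data error
Cm = sigma_d**2 * Vk @ np.diag(1.0 / sk**2) @ Vk.T   # model covariance matrix
```

The diagonal of R shows how well each Q cell is resolved, and the diagonal of Cm gives the formal model variance; these are exactly the quantities that iterative solvers such as LSQR and SIRT do not provide.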

  11. SCIDAC Center for simulation of wave particle interactions CompX participation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R.W.

    Harnessing the energy that is released in fusion reactions would provide a safe and abundant source of power to meet the growing energy needs of the world population. The next step toward the development of fusion as a practical energy source is the construction of ITER, a device capable of producing and controlling the high performance plasma required for self-sustaining fusion reactions, or “burning” plasma. The input power required to drive the ITER plasma into the burning regime will be supplied primarily with a combination of external power from radio frequency waves in the ion cyclotron range of frequencies and energetic ions from neutral beam injection sources, in addition to internally generated Ohmic heating from the induced plasma current that also serves to create the magnetic equilibrium for the discharge. The ITER project is a large multi-billion-dollar international project in which the US participates. Because the success of the ITER project depends critically on the ability to create and maintain burning plasma conditions, it is absolutely necessary to have physics-based models that can accurately simulate the RF processes that affect the dynamical evolution of the ITER discharge.
    The Center for Simulation of Wave-Plasma Interactions (CSWPI), also known as RF-SciDAC, is a multi-institutional collaboration that has conducted ongoing research aimed at developing: (1) Coupled core-to-edge simulations that will lead to an increased understanding of parasitic losses of the applied RF power in the boundary plasma between the RF antenna and the core plasma; (2) Development of models for core interactions of RF waves with energetic electrons and ions (including fusion alpha particles and fast neutral beam ions) that include a more accurate representation of the particle dynamics in the combined equilibrium and wave fields; and (3) Development of improved algorithms that will take advantage of massively parallel computing platforms at the petascale level and beyond to achieve the needed physics, resolution, and/or statistics to address these issues. CompX provides computer codes and analysis for the calculation of the electron and ion distributions in velocity-space and plasma radius which are necessary for reliable calculations of power deposition and toroidal current drive due to combined radiofrequency and neutral-beam injection at high injected powers. It has also contributed to ray tracing modeling of injected radiofrequency powers, and to coupling between full-wave radiofrequency wave models and the distribution function calculations. In the course of this research, the Fokker-Planck distribution function calculation was made substantially more realistic by inclusion of finite-width drift-orbit effects (FOW). FOW effects were also implemented in a calculation of the phase-space diffusion resulting from radiofrequency full-wave models. Average level of funding for CompX was approximately three man-months per year.

  12. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12+/-2 HU to 1+/-1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48+/-6 HU to 1+/-5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28+/-6 HU to less than 4+/-4 HU at peak enhancement. The results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
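The heart of the polynomial correction can be illustrated in one dimension; the two-bin spectrum below is invented, and a direct least-squares fit stands in for the paper's iterative, cost-driven parameter search (which has no monochromatic ground truth to fit against):

```python
import numpy as np

# Toy polychromatic projection: two energy bins with different attenuation
# coefficients (0.2 and 0.4 per unit length, equally weighted).
t = np.linspace(0.0, 5.0, 200)        # material thickness along each ray
p_mono = 0.3 * t                      # ideal monochromatic (linear) projection
p_poly = -np.log(0.5 * np.exp(-0.2 * t) + 0.5 * np.exp(-0.4 * t))

# Polynomial BHC: find c so that c1*p + c2*p^2 + c3*p^3 is linear in t again.
A = np.stack([p_poly, p_poly**2, p_poly**3], axis=1)
c, *_ = np.linalg.lstsq(A, p_mono, rcond=None)
p_corr = A @ c                        # linearized (corrected) projection
```

In the actual algorithm the coefficients would instead be tuned by minimizing the image-domain BHA cost, since the monochromatic projection is unknown.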

  13. Reliability of functional MR imaging with word-generation tasks for mapping Broca's area.

    PubMed

    Brannen, J H; Badie, B; Moritz, C H; Quigley, M; Meyerand, M E; Haughton, V M

    2001-10-01

    Functional MR (fMR) imaging of word generation has been used to map Broca's area in some patients selected for craniotomy. The purpose of this study was to measure the reliability, precision, and accuracy of word-generation tasks to identify Broca's area. The Brodmann areas activated during performance of word-generation tasks were tabulated in 34 consecutive patients referred for fMR imaging mapping of language areas. In patients performing two iterations of the letter word-generation tasks, test-retest reliability was quantified by using the concurrence ratio (CR), or the number of voxels activated by each iteration in proportion to the average number of voxels activated from both iterations of the task. Among patients who also underwent category or antonym word generation or both, the similarity of the activation from each task was assessed with the CR. In patients who underwent electrocortical stimulation (ECS) mapping of speech function during craniotomy while awake, the sites with speech function were compared with the locations of activation found during fMR imaging of word generation. In 31 of 34 patients, activation was identified in the inferior frontal gyri or middle frontal gyri or both in Brodmann areas 9, 44, 45, or 46, unilaterally or bilaterally, with one or more of the tasks. Activation was noted in the same gyri when the patient performed a second iteration of the letter word-generation task or second task. The CR for pixel precision in a single section averaged 49%. In patients who underwent craniotomy while awake, speech areas located with ECS coincided with areas of the brain activated during a word-generation task. fMR imaging with word-generation tasks produces technically satisfactory maps of Broca's area, which localize the area accurately and reliably.

  14. Evaluation of hybrid SART  +  OS  +  TV iterative reconstruction algorithm for optical-CT gel dosimeter imaging

    NASA Astrophysics Data System (ADS)

    Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping

    2016-12-01

Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as reference. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART  +  OS  +  TV finally converge to those of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of SART  +  OS  +  TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in SART  +  OS  +  TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on this assessment, we conclude that the hybrid SART  +  OS  +  TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and that its effectiveness and efficiency are platform independent.
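A toy 1-D version of the SART + OS + TV loop can make the structure of the hybrid concrete. This is a generic sketch of the technique, not the authors' implementation; matrix sizes, the TV subgradient step, and all weights are illustrative:

```python
import numpy as np

def sart_os_tv(A, b, n_iters=20, n_subsets=4, tv_weight=0.002, tv_steps=3):
    """Generic SART with ordered subsets plus a crude 1-D TV smoothing step.
    A: (M, N) system matrix, b: (M,) projections, returns x: (N,) image."""
    M, N = A.shape
    x = np.zeros(N)
    subsets = np.array_split(np.arange(M), n_subsets)
    for _ in range(n_iters):
        for s in subsets:                   # OS: update on each subset in turn
            As, bs = A[s], b[s]
            resid = (bs - As @ x) / np.maximum(As.sum(axis=1), 1e-12)
            x += (As.T @ resid) / np.maximum(As.sum(axis=0), 1e-12)
        for _ in range(tv_steps):           # TV: subgradient step on sum(|x'|)
            g = np.sign(np.gradient(x))
            x -= tv_weight * (-np.gradient(g))
    return x
```

The row- and column-sum normalizations are the standard SART weights; the TV step here is the simplest possible subgradient descent on the 1-D total variation.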

  15. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influence quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose imaging protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernel. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
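The contrast-to-noise ratio used as a figure of merit above can be illustrated with one common definition (mean HU difference between an insert ROI and the background, divided by the background noise); the study's exact formula may differ, and the HU samples below are synthetic:

```python
import numpy as np

def cnr(roi_hu, background_hu):
    """CNR: absolute mean HU contrast over background standard deviation.
    (One common definition; not necessarily the paper's exact formula.)"""
    return abs(roi_hu.mean() - background_hu.mean()) / background_hu.std(ddof=1)

rng = np.random.default_rng(0)
insert = rng.normal(300.0, 20.0, 1000)   # synthetic bone-like insert, HU
water = rng.normal(0.0, 20.0, 1000)      # synthetic water background, HU
value = cnr(insert, water)               # roughly 300 HU contrast / 20 HU noise
```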

  16. The contribution of different information sources for adverse effects data.

    PubMed

    Golder, Su; Loke, Yoon K

    2012-04-01

The aim of this study is to determine the relative value and contribution of searching different sources to identify adverse effects data. The process of updating a systematic review and meta-analysis of thiazolidinedione-related fractures in patients with type 2 diabetes mellitus was used as a case study. For each source searched, a record was made for each relevant reference included in the review noting whether it was retrieved with the search strategy used and whether it was available but not retrieved. The sensitivity, precision, and number needed to read from searching each source and from different combinations of sources were also calculated. There were 58 relevant references which presented sufficient numerical data to be included in a meta-analysis of fractures and bone mineral density. The highest number of relevant references was retrieved from Science Citation Index (SCI) (35), followed by BIOSIS Previews (27) and EMBASE (24). The precision of the searches varied from 0.88% (Scirus) to 41.67% (CENTRAL). With the search strategies used, the minimum combination of sources required to retrieve all the relevant references was: the GlaxoSmithKline (GSK) website, Science Citation Index (SCI), EMBASE, BIOSIS Previews, British Library Direct, Medscape DrugInfo, handsearching and reference checking, AHFS First, and Thomson Reuters Integrity or Conference Papers Index (CPI). In order to identify all the relevant references for this case study, a number of different sources needed to be searched. The minimum combination of sources required to identify all the relevant references did not include MEDLINE.
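The sensitivity, precision, and number-needed-to-read figures reported above follow standard retrieval definitions, sketched here; the total-records-screened figure in the example is hypothetical (only the 35-of-58 counts come from the abstract):

```python
def search_metrics(relevant_found, total_relevant, records_screened):
    """Sensitivity = fraction of all relevant references retrieved;
    precision   = fraction of screened records that were relevant;
    NNR (number needed to read) = 1 / precision."""
    sensitivity = relevant_found / total_relevant
    precision = relevant_found / records_screened
    return sensitivity, precision, 1.0 / precision

# SCI retrieved 35 of the 58 relevant references; 4000 records screened
# is an invented figure for illustration
sens, prec, nnr = search_metrics(35, 58, 4000)
```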

  17. SAIP2014, the 59th Annual Conference of the South African Institute of Physics

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Chris; Karataglidis, Steven

    2015-04-01

The International Celestial Reference Frame (ICRF) was adopted by the International Astronomical Union (IAU) in 1997. The current standard, the ICRF-2, is based on Very Long Baseline Interferometric (VLBI) radio observations of positions of 3414 extragalactic radio reference sources. The angular resolution achieved by the VLBI technique is on a scale of milliarcseconds to sub-milliarcseconds and defines the ICRF with the highest accuracy available at present. An ideal reference source used for celestial reference frame work should be unresolved or point-like on these scales. However, extragalactic radio sources, such as those that define and maintain the ICRF, can exhibit spatially extended structures on sub-milliarcsecond scales that may vary both in time and frequency. This variability can introduce a significant error in the VLBI measurements thereby degrading the accuracy of the estimated source position. Reference source density in the Southern celestial hemisphere is also poor compared to the Northern hemisphere, mainly due to the limited number of radio telescopes in the south. In order to define the ICRF with the highest accuracy, observational efforts are required to find more compact sources and to monitor their structural evolution. In this paper we show that the astrometric VLBI sessions can be used to obtain source structure information and we present preliminary imaging results for the source J1427-4206 at 2.3 and 8.4 GHz frequencies which show that the source is compact and suitable as a reference source.

  18. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the source parameters for location by several hundred percent (normalized by the distance from source to the closest sampler), and improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and adjust the wind to provide a better match between the hazard prediction and the observations.
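The first-guess-then-refine structure can be illustrated with a toy stand-in: a Gaussian "puff" forward model and a simple pattern search in place of VIRSA's adjoint-based variational step. All model forms, names, and numbers here are illustrative, not the actual VIRSA components:

```python
import numpy as np

def toy_puff(params, xs, ys, sigma=50.0):
    """Toy Gaussian 'puff' concentration at sensor positions.
    params = (x0, y0, q): source location (m) and released mass (arb.)."""
    x0, y0, q = params
    return q * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

def misfit(params, obs, xs, ys):
    """Squared mismatch between modeled and observed concentrations."""
    return ((toy_puff(params, xs, ys) - obs) ** 2).sum()

def refine(first_guess, obs, xs, ys, steps=(10.0, 10.0, 0.5), n_shrink=15):
    """Iteratively refine the first-guess source term by pattern search:
    probe +/- one step per parameter, accept any improvement, and halve
    all steps once no probe improves the misfit."""
    p = np.array(first_guess, float)
    step = np.array(steps, float)
    for _ in range(n_shrink):
        improved = True
        while improved:
            improved = False
            for i in range(len(p)):
                for sgn in (1.0, -1.0):
                    trial = p.copy()
                    trial[i] += sgn * step[i]
                    if misfit(trial, obs, xs, ys) < misfit(p, obs, xs, ys):
                        p, improved = trial, True
        step /= 2.0
    return p
```

Pattern search is chosen here purely because it is short and monotonically non-increasing in misfit; the real algorithm uses a surrogate forward model and its formal adjoint instead.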

  19. Central magnetic anomalies of Nectarian-aged lunar impact basins: Probable evidence for an early core dynamo

    NASA Astrophysics Data System (ADS)

    Hood, Lon L.

    2011-02-01

    A re-examination of all available low-altitude LP magnetometer data confirms that magnetic anomalies are present in at least four Nectarian-aged lunar basins: Moscoviense, Mendel-Rydberg, Humboldtianum, and Crisium. In three of the four cases, a single main anomaly is present near the basin center while, in the case of Crisium, anomalies are distributed in a semi-circular arc about the basin center. These distributions, together with a lack of other anomalies near the basins, indicate that the sources of the anomalies are genetically associated with the respective basin-forming events. These central basin anomalies are difficult to attribute to shock remanent magnetization of a shocked central uplift and most probably imply thermoremanent magnetization of impact melt rocks in a steady magnetizing field. Iterative forward modeling of the single strongest and most isolated anomaly, the northern Crisium anomaly, yields a paleomagnetic pole position at 81° ± 19°N, 143° ± 31°E, not far from the present rotational pole. Assuming no significant true polar wander since the Crisium impact, this position is consistent with that expected for a core dynamo magnetizing field. Further iterative forward modeling demonstrates that the remaining Crisium anomalies can be approximately simulated assuming a multiple source model with a single magnetization direction equal to that inferred for the northernmost anomaly. This result is most consistent with a steady, large-scale magnetizing field. The inferred mean magnetization intensity within the strongest basin sources is ˜1 A/m assuming a 1-km thickness for the source layer. Future low-altitude orbital and surface magnetometer measurements will more strongly constrain the depth and/or thicknesses of the sources.
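The forward modeling step rests on evaluating the field of magnetized sources. For a single point dipole the standard expression is easy to sketch; this is the generic dipole formula, not the authors' multi-source basin model:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T·m/A

def dipole_field(m, r):
    """Magnetic field (tesla) of a point dipole with moment m (A·m^2),
    observed at displacement r (m) from the dipole:
    B = mu0/(4*pi) * (3(m·r_hat)r_hat - m) / |r|^3."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn ** 3

# On-axis field 1 m above a unit z-directed dipole: 2e-7 T along z
B = dipole_field([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
```

Iterative forward modeling of the kind described then amounts to summing such contributions over candidate source distributions and adjusting them until the modeled anomaly matches the magnetometer data.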

  20. Design, installation, commissioning and operation of a beamlet monitor in the negative ion beam test stand at NIFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, V.; Agostinetti, P.; Brombin, M.

    2015-04-08

In the framework of the accompanying activity for the development of the two neutral beam injectors for the ITER fusion experiment, an instrumented beam calorimeter is being designed at Consorzio RFX, to be used in the SPIDER test facility (particle energy 100 keV; beam current 50 A), with the aim of testing beam characteristics and verifying proper operation of the source. The main components of the instrumented calorimeter are one-directional carbon-fibre-carbon composite tiles. Some prototype tiles have been used as a small-scale version of the entire calorimeter in the test stand of the neutral beam injectors of the LHD experiment, with the aim of characterising the beam features in various operating conditions. The extraction system of the NIFS test stand source was modified, by applying a mask to the first gridded electrode, in order to isolate only a subset of the beamlets, arranged in two 3×5 matrices, resembling the beamlet groups of the ITER beam sources. The present contribution gives a description of the design of the diagnostic system, including the numerical simulations of the expected thermal pattern. Moreover the dedicated thermocouple measurement system is presented. The beamlet monitor was successfully used for a full experimental campaign, during which the main parameters of the source, mainly the arc power and the grid voltages, were varied. This contribution describes the methods of fitting and data analysis applied to the infrared images of the camera to recover the beamlet optics characteristics, in order to quantify the response of the system to different operational conditions. Some results concerning the beamlet features are presented as a function of the source parameters.

  1. Resolving the problem of multiple accessions of the same transcript deposited across various public databases.

    PubMed

    Weirick, Tyler; John, David; Uchida, Shizuka

    2017-03-01

    Maintaining the consistency of genomic annotations is an increasingly complex task because of the iterative and dynamic nature of assembly and annotation, growing numbers of biological databases and insufficient integration of annotations across databases. As information exchange among databases is poor, a 'novel' sequence from one reference annotation could be annotated in another. Furthermore, relationships to nearby or overlapping annotated transcripts are even more complicated when using different genome assemblies. To better understand these problems, we surveyed current and previous versions of genomic assemblies and annotations across a number of public databases containing long noncoding RNA. We identified numerous discrepancies of transcripts regarding their genomic locations, transcript lengths and identifiers. Further investigation showed that the positional differences between reference annotations of essentially the same transcript could lead to differences in its measured expression at the RNA level. To aid in resolving these problems, we present the algorithm 'Universal Genomic Accession Hash (UGAHash)' and created an open source web tool to encourage the usage of the UGAHash algorithm. The UGAHash web tool (http://ugahash.uni-frankfurt.de) can be accessed freely without registration. The web tool allows researchers to generate Universal Genomic Accessions for genomic features or to explore annotations deposited in the public databases of the past and present versions. We anticipate that the UGAHash web tool will be a valuable tool to check for the existence of transcripts before judging the newly discovered transcripts as novel. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
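The core idea, deriving a single position-based accession so that the same transcript hashes to the same identifier regardless of source database, can be sketched as follows. This is a hypothetical illustration in the spirit of UGAHash, not the actual UGAHash algorithm, and the helper name and coordinates are invented:

```python
import hashlib

def genomic_accession(chrom, start, end, strand, prefix="ACC"):
    """Hypothetical position-based accession: identical genomic coordinates
    always map to the same identifier, independent of source database.
    (Illustrative only; not the published UGAHash scheme.)"""
    key = f"{chrom}:{start}-{end}({strand})"
    digest = hashlib.sha1(key.encode()).hexdigest()[:12]
    return f"{prefix}-{digest.upper()}"

# The same coordinates reported by two different databases collide
# onto one accession, exposing the duplicate deposition
a = genomic_accession("chr7", 5527151, 5530601, "-")
b = genomic_accession("chr7", 5527151, 5530601, "-")
```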

  2. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    NASA Astrophysics Data System (ADS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods were compared, in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
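The RMS and Hausdorff distance metrics used above are standard point-set measures; minimal generic versions (not the study's MATLAB code) look like this:

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between point sets P (N,d) and Q (M,d):
    the largest nearest-neighbour distance in either direction."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def rms_error(P, Q):
    """Root-mean-square distance between corresponding (paired) points,
    e.g. matched fiducial landmarks before and after registration."""
    return np.sqrt(((P - Q) ** 2).sum(axis=1).mean())

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
hd, rms = hausdorff(P, Q), rms_error(P, Q)
```

Note the difference in requirements: RMS needs known point correspondences (as with fiducial landmarks for TRE), while the Hausdorff distance compares unpaired sets.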

  3. 10 CFR 32.102 - Schedule C-prototype tests for calibration or reference sources containing americium-241 or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Schedule C-prototype tests for calibration or reference... Licensed Items § 32.102 Schedule C—prototype tests for calibration or reference sources containing..., conduct prototype tests, in the order listed, on each of five prototypes of the source, which contains...

  4. Study and comparison of different sensitivity models for a two-plane Compton camera.

    PubMed

    Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F

    2018-06-25

Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with 22Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.

  5. Numerical simulation of an elastic structure behavior under transient fluid flow excitation

    NASA Astrophysics Data System (ADS)

    Afanasyeva, Irina N.; Lantsova, Irina Yu.

    2017-01-01

This paper deals with the verification of a numerical technique for modeling fluid-structure interaction (FSI) problems. The configuration consists of incompressible viscous fluid around an elastic structure in a channel. External flow is laminar. Multivariate calculations are performed using the specialized software ANSYS CFX and ANSYS Mechanical. Different mesh-deformation parameters and solver controls (time step, under-relaxation factor, number of iterations per coupling step) were tested. The results are presented in tables and plots in comparison with reference data.

  6. A Low Cost Navigation Microprocessor System.

    DTIC Science & Technology

    1977-03-01


  7. Proceedings from the U.S. Army Corps of Engineers (USACE) and the National Oceanic and Atmospheric Administration (NOAA) Engineering With Nature Workshop

    DTIC Science & Technology

    2017-03-01

    opportunities emerged. It will be essential to capture and share lessons learned as the two organizations plan and implement selected EWN projects...their top five or six opportunities and subsequently selected the two highest priorities. Each of the three breakout groups then worked together to...will ensure agency buy-in, establish local reference sites, and promote EWN principles. Site selection will include an iterative process that factors

  8. Radiation Hardness Assurance (RHA) Guideline

    NASA Technical Reports Server (NTRS)

    Campola, Michael J.

    2016-01-01

    Radiation Hardness Assurance (RHA) consists of all activities undertaken to ensure that the electronics and materials of a space system perform to their design specifications after exposure to the mission space environment. The subset of interests for NEPP and the REAG, are EEE parts. It is important to register that all of these undertakings are in a feedback loop and require constant iteration and updating throughout the mission life. More detail can be found in the reference materials on applicable test data for usage on parts.

  9. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per voxel basis as primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as secondary endpoint. For cartilage being a thin structure, already small deviations in distance result in large errors on a per voxel basis, rendering the primary endpoint a hard criterion.

  10. 1 Tbit/inch2 Recording in Angular-Multiplexing Holographic Memory with Constant Signal-to-Scatter Ratio Schedule

    NASA Astrophysics Data System (ADS)

    Hosaka, Makoto; Ishii, Toshiki; Tanaka, Asato; Koga, Shogo; Hoshizawa, Taku

    2013-09-01

    We developed an iterative method for optimizing the exposure schedule to obtain a constant signal-to-scatter ratio (SSR) to accommodate various recording conditions and achieve high-density recording. 192 binary images were recorded in the same location of a medium in approximately 300×300 µm2 using an experimental system embedded with a blue laser diode with a 405 nm wavelength and an objective lens with a 0.85 numerical aperture. The recording density of this multiplexing corresponds to 1 Tbit/in.2. The recording exposure time was optimized through the iteration of a three-step sequence consisting of total reproduced intensity measurement, target signal calculation, and recording energy density calculation. The SSR of pages recorded with this method was almost constant throughout the entire range of the reference beam angle. The signal-to-noise ratio of the sampled pages was over 2.9 dB, which is higher than the reproducible limit of 1.5 dB in our experimental system.
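The three-step optimization sequence (measure total reproduced intensity, compute a target signal, recompute the recording energy) amounts to a fixed-point iteration toward equal page signals. A toy sketch, with an entirely fabricated linear page-gain model standing in for the real holographic medium:

```python
def equalize_schedule(measure, n_pages, n_rounds=5):
    """Toy version of the three-step loop: (1) measure each page's
    reproduced intensity, (2) set the target to the mean intensity,
    (3) rescale each page's exposure energy toward the target."""
    energies = [1.0] * n_pages
    for _ in range(n_rounds):
        intensities = measure(energies)                                     # step 1
        target = sum(intensities) / n_pages                                 # step 2
        energies = [e * target / i for e, i in zip(energies, intensities)]  # step 3
    return energies

# Fabricated response model: reproduced intensity = page gain * exposure energy
gains = [0.5, 1.0, 1.5, 2.0]
measure = lambda es: [g * e for g, e in zip(gains, es)]
energies = equalize_schedule(measure, len(gains))
```

With this linear toy model the iteration equalizes the reproduced intensities immediately; the real schedule optimization has to contend with scatter, consumable dynamic range, and angle-dependent diffraction efficiency.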

  11. Bridging single and multireference coupled cluster theories with universal state selective formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Kowalski, Karol

    2013-05-28

The universal state selective (USS) multireference approach is used to construct new energy functionals which offer a unique possibility of bridging single and multireference coupled cluster theories (SR/MRCC). These functionals, which can be used to develop iterative and non-iterative approaches, utilize a special form of the trial wavefunctions, which assure additive separability (or size-consistency) of the USS energies in the non-interacting subsystem limit. When the USS formalism is combined with approximate SRCC theories, the resulting formalism can be viewed as a size-consistent version of the method of moments of coupled cluster equations (MMCC) employing a MRCC trial wavefunction. Special cases of the USS formulations, which utilize single reference state specific CC (V.V. Ivanov, D.I. Lyakh, L. Adamowicz, Phys. Chem. Chem. Phys. 11, 2355 (2009)) and tailored CC (T. Kinoshita, O. Hino, R.J. Bartlett, J. Chem. Phys. 123, 074106 (2005)) expansions are also discussed.

  12. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana Fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  13. WE-G-BRF-07: Non-Circular Scanning Trajectories with Varian Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A; Pearson, E; Pan, X

    2014-06-15

Purpose: Cone-beam CT (CBCT) in image-guided radiation therapy (IGRT) typically acquires scan data via the circular trajectory of the linear accelerator's (linac) gantry rotation. Though this lends itself to analytic reconstruction algorithms like FDK, iterative reconstruction algorithms allow for a broader range of scanning trajectories. We implemented a non-circular scanning trajectory with Varian's TrueBeam Developer Mode and performed some preliminary reconstructions to verify the geometry. Methods: We used TrueBeam Developer Mode to program a new scanning trajectory that increases the field of view (FOV) along the gantry rotation axis without moving the patient. This trajectory consisted of moving the gantry in a circle, then translating the source and detector along the axial direction before acquiring another circular scan 19 cm away from the first. The linear portion of the trajectory includes an additional 4.5 cm above and below the axial planes of the source's circular rotation. We scanned a calibration phantom consisting of a lucite tube with a spiral pattern of CT spots and used the maximum-likelihood algorithm to iteratively reconstruct the CBCT volume. Results: With the TrueBeam trajectory definition, we acquired projection data of the calibration phantom using the previously described trajectory. We obtained a scan of the treatment couch for log normalization by scanning with the same trajectory but without the phantom present. Using the nominal geometric parameters reported in the projection headers with our iterative reconstruction algorithm, we obtained a correct reconstruction of the calibration phantom. Conclusion: The ability to implement new scanning trajectories with TrueBeam Developer Mode gives us access to a new parameter space for imaging with CBCT for IGRT. Previous simulations and simple dual circle scans have shown iterative reconstruction with non-circular trajectories can increase the axial FOV with CBCT. Use of Developer Mode allows experimental testing of these and other new scanning trajectories. Support was provided in part by the University of Chicago Research Computing Center, Varian Medical Systems, and NIH Grants 1RO1CA120540, T32EB002103, S10 RR021039 and P30 CA14599. The contents of this work are solely the responsibility of the authors and do not necessarily represent the official views of the supporting organizations.

  14. Apparatus and method for detecting gamma radiation

    DOEpatents

    Sigg, R.A.

    1994-12-13

    A high efficiency radiation detector is disclosed for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data. 4 figures.

  15. Refactoring and Its Benefits

    NASA Astrophysics Data System (ADS)

    Veerraju, R. P. S. P.; Rao, A. Srinivasa; Murali, G.

    2010-10-01

    Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. It improves the internal structure of code without altering its external functionality by transforming functions and rethinking algorithms, and it is an iterative process. Refactoring techniques include reducing scope, replacing complex instructions with simpler or built-in instructions, and combining multiple statements into one. Code transformed with refactoring techniques becomes faster to change, execute, and download, making refactoring an excellent practice for programmers who want to improve their productivity. Refactoring is similar to performance optimization, which is also a behavior-preserving transformation. It also helps us find bugs when we are trying to fix a bug in difficult-to-understand code: by cleaning things up, we make it easier to expose the bug. Refactoring improves the quality of application design and implementation. In general, there are three approaches to refactoring: iterative refactoring, refactoring only when necessary, and not refactoring at all. Martin Fowler identifies four key reasons to refactor: refactoring improves the design of software, makes software easier to understand, helps us find bugs, and helps us program faster. There is an additional benefit of refactoring: it changes the way a developer thinks about the implementation even when not refactoring. There are three types of refactoring. 1) Code refactoring, often referred to simply as refactoring, is the refactoring of programming source code. 2) Database refactoring is a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. 3) User interface (UI) refactoring is a simple change to the UI that retains its semantics. 
Finally, we conclude that the benefits of refactoring are: it improves the design of software, makes software easier to understand, cleans up the code, helps us find bugs, and helps us program faster.
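
    The statement-combining and built-in-replacement refactorings described in this abstract can be sketched with a minimal before/after pair. The function names and data below are illustrative assumptions, not taken from the paper; the point is only that the external behavior is preserved while the internal structure is simplified:

    ```python
    def total_even_squares_before(numbers):
        # Original form: manual loop with intermediate state and
        # several separate statements.
        result = 0
        for n in numbers:
            if n % 2 == 0:
                square = n * n
                result = result + square
        return result

    def total_even_squares_after(numbers):
        # Refactored form: same external behavior, expressed with a
        # built-in (sum) and a generator expression in one statement.
        return sum(n * n for n in numbers if n % 2 == 0)

    # Behavior-preserving check: both versions agree.
    assert total_even_squares_before([1, 2, 3, 4]) == total_even_squares_after([1, 2, 3, 4])
    ```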

  16. An evaluation of the structural validity of the shoulder pain and disability index (SPADI) using the Rasch model.

    PubMed

    Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C

    2018-02-01

    The shoulder pain and disability index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated a misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5) and mean person fit residual 0.44 (1.1); person separation index (PSI) 0.83). The disability subscale, however, showed significant misfit due to uniform DIF even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26) and mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model, and its current format does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials. 
Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.
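
    As background to the analysis above, the dichotomous Rasch model (the core on which the polytomous variants applied to scales like the SPADI are built) can be sketched in a few lines. This is a generic illustration of the model, not the authors' analysis code:

    ```python
    import math

    def rasch_probability(theta, b):
        """Probability of a correct/endorsed response under the
        dichotomous Rasch model:
            P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
        where theta is person ability and b is item difficulty,
        both on the same logit scale."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # When ability equals item difficulty, the probability is exactly 0.5.
    print(rasch_probability(1.0, 1.0))  # 0.5
    ```

    Model fit statistics such as the item and person fit residuals quoted in the abstract compare observed responses against these expected probabilities.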

  17. Refactoring and Its Benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veerraju, R. P. S. P.; Rao, A. Srinivasa; Murali, G.

    2010-10-26

    Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. It improves the internal structure of code without altering its external functionality by transforming functions and rethinking algorithms, and it is an iterative process. Refactoring techniques include reducing scope, replacing complex instructions with simpler or built-in instructions, and combining multiple statements into one. Code transformed with refactoring techniques becomes faster to change, execute, and download, making refactoring an excellent practice for programmers who want to improve their productivity. Refactoring is similar to performance optimization, which is also a behavior-preserving transformation. It also helps us find bugs when we are trying to fix a bug in difficult-to-understand code: by cleaning things up, we make it easier to expose the bug. Refactoring improves the quality of application design and implementation. In general, there are three approaches to refactoring: iterative refactoring, refactoring only when necessary, and not refactoring at all. Martin Fowler identifies four key reasons to refactor: refactoring improves the design of software, makes software easier to understand, helps us find bugs, and helps us program faster. There is an additional benefit of refactoring: it changes the way a developer thinks about the implementation even when not refactoring. There are three types of refactoring. 1) Code refactoring, often referred to simply as refactoring, is the refactoring of programming source code. 2) Database refactoring is a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. 3) User interface (UI) refactoring is a simple change to the UI that retains its semantics. 
Finally, we conclude that the benefits of refactoring are: it improves the design of software, makes software easier to understand, cleans up the code, helps us find bugs, and helps us program faster.

  18. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference (3.4 mGy in volume CT dose index [CTDIvol]) and 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using filtered back projection (FBP) algorithms and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced radiation dose levels (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
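
    The SNR and CNR figures of merit compared in this study follow standard region-of-interest (ROI) definitions; the sketch below uses those generic definitions, with illustrative pixel values rather than data from the paper (the authors' exact ROI placement and noise definition may differ):

    ```python
    import statistics

    def snr(roi):
        # Signal-to-noise ratio: mean attenuation in the ROI divided by
        # its standard deviation (the image noise).
        return statistics.mean(roi) / statistics.stdev(roi)

    def cnr(roi_a, roi_b, background):
        # Contrast-to-noise ratio: absolute difference of the mean
        # attenuation in two ROIs, divided by the noise (standard
        # deviation) measured in a background region.
        return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / statistics.stdev(background)

    # Illustrative HU values for two tissue ROIs and a background ROI.
    print(cnr([100.0, 100.0], [60.0, 60.0], [98.0, 100.0, 102.0]))  # 20.0
    ```

    A dose-reduced scan raises the standard deviation terms, which is why the abstract reports SNR and CNR falling sharply between the 60%- and 90%-reduced dose levels.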

  19. The Healthcare Complaints Analysis Tool: development and reliability testing of a method for service monitoring and organisational learning

    PubMed Central

    Gillespie, Alex; Reader, Tom W

    2016-01-01

    Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n=25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate to substantial reliability in identifying stages of care (except for ‘discharge/transfer’, which was only fairly reliable); and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints; it is also the first tool to measure the severity of complaints. 
It facilitates service monitoring and organisational learning and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
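
    Inter-rater reliability of the kind reported above ("fair", "moderate", "substantial") is conventionally expressed with chance-corrected agreement statistics. As one common example, Cohen's kappa for two raters can be sketched as follows; this is a generic illustration with hypothetical category labels, not the statistic or data used by the HCAT authors:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters coding the same
        items: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
        agreement and p_e is agreement expected by chance."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # Observed proportion of items on which the raters agree.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal category frequencies.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Two hypothetical raters assigning problem categories to 4 complaints.
    a = ["quality", "safety", "quality", "respect"]
    b = ["quality", "safety", "communication", "respect"]
    print(round(cohens_kappa(a, b), 3))  # 0.667
    ```

    Benchmarks such as Landis and Koch's (kappa above 0.61 read as "substantial") are one common way such coefficients are mapped onto the verbal labels used in the abstract.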

  20. Optimizing the current ramp-up phase for the hybrid ITER scenario

    NASA Astrophysics Data System (ADS)

    Hogeweij, G. M. D.; Artaud, J.-F.; Casper, T. A.; Citrin, J.; Imbeaux, F.; Köchl, F.; Litaudon, X.; Voitsekhovitch, I.; the ITM-TF ITER Scenario Modelling Group

    2013-01-01

    The current ramp-up phase for the ITER hybrid scenario is analysed with the CRONOS integrated modelling suite. The simulations presented in this paper show that the heating systems available at ITER allow, within the operational limits, the attainment of a hybrid q profile at the end of the current ramp-up. A reference ramp-up scenario is reached by a combination of NBI, ECCD (UPL) and LHCD. A heating scheme with only NBI and ECCD can also reach the target q profile; however, LHCD can play a crucial role in reducing the flux consumption during the ramp-up phase. The optimum heating scheme depends on the chosen transport model and on assumptions on parameters such as ne peaking, edge Te,i and Zeff. The sensitivity of the current diffusion to parameters that are not easily controlled shows that the development of real-time control is important for reaching the target q profile; a first step in that direction is indicated in this paper. Minimizing resistive flux consumption and optimizing the q profile turn out to be conflicting requirements, so a trade-off between the two has to be made. In this paper it is shown that a fast current ramp with L-mode current overshoot is at one extreme, i.e. the optimum q profile at the cost of increased resistive flux consumption, whereas an early H-mode transition is at the other extreme.
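
    The resistive flux consumption traded off against the q profile in this abstract is commonly estimated in ramp-up studies with the empirical Ejima scaling; as a hedged sketch (the coefficient C_E and the scaling itself are generic tokamak practice, not values or formulas taken from this paper):

    ```latex
    % Resistive poloidal flux consumed while ramping up to plasma current I_p:
    %   R_0  — major radius,  mu_0 — vacuum permeability,
    %   C_E  — empirical Ejima coefficient (typically of order 0.3–0.5,
    %          lower for more efficient, e.g. LHCD-assisted, ramp-ups).
    \Psi_{\mathrm{res}} \approx C_E \, \mu_0 \, R_0 \, I_p
    ```

    In this picture, reducing C_E (for example via non-inductive current drive such as LHCD) is what saves flux for the subsequent flat-top phase.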

Top