Sample records for iterative dimension reduction

  1. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A recently proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data and therefore has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, collecting labeled instances is usually expensive and time-consuming, while unlabeled or undetermined instances are relatively easy to acquire. Semi-supervised learning is therefore well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.

  2. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

    A novel polarization-insensitive metasurface with over 25 dB monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface comprises carefully arranged unit cells with spatially varying dimensions, which enables approximately uniform diffusion of incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method from conventional antenna-array pattern synthesis is applied in a novel way to find the best arrangement of unit-cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction predicted by the full-wave simulation software Ansys HFSS, and close agreement is observed.
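
    The iterative FFT synthesis loop mentioned above alternates between the aperture and far-field domains. A minimal sketch, assuming a toy two-phase (0/π) unit-cell arrangement and numpy FFTs rather than the authors' actual setup:

    ```python
    import numpy as np

    N = 32                                         # unit cells per side (assumed)
    rng = np.random.default_rng(0)
    phase = rng.choice([0.0, np.pi], size=(N, N))  # initial 0/pi cell arrangement

    for _ in range(200):
        aperture = np.exp(1j * phase)
        far = np.fft.fft2(aperture, s=(256, 256))  # array factor on a fine grid
        far = np.exp(1j * np.angle(far))           # far-field constraint: flat magnitude (diffusion)
        back = np.fft.ifft2(far)[:N, :N]
        # Aperture constraint: re-quantize to the two available cell phases.
        phase = np.where(back.real >= 0.0, 0.0, np.pi)
    ```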

  3. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    Iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical Krylov-type processes. The key ingredient of the IDR algorithms is the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization against a fixed subspace. Other independent approaches for analyzing and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces provides an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant to parallel algebraic domain decomposition approaches.

  4. Tensor sufficient dimension reduction

    PubMed Central

    Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth

    2015-01-01

    A tensor is a multiway array. With the rapid development of science and technology over the past decades, large numbers of tensor observations are now routinely collected, processed, and stored in scientific research and commercial activities. Colorimetric sensor array (CSA) data are one such example. Driven by the need to address the data-analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304

  5. Unsteady Flow Simulation: A Numerical Challenge

    DTIC Science & Technology

    2003-03-01

    drive to convergence the numerical unsteady term. The time marching procedure is based on the approximate implicit Newton method for systems of non...computed through analytical derivatives of S. The linear system stemming from equation (3) is solved at each integration step by the same iterative method...significant reduction of memory usage, thanks to the reduced dimensions of the linear system matrix during the implicit marching of the solution. The

  6. Hierarchical optimization for neutron scattering problems

    DOE PAGES

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...

    2016-03-14

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  8. The staircase method: integrals for periodic reductions of integrable lattice equations

    NASA Astrophysics Data System (ADS)

    van der Kamp, Peter H.; Quispel, G. R. W.

    2010-11-01

    We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling-wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q <= 2r variables which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that the staircase method often provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular Z^2 lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.

  9. Surface-structured diffuser by iterative down-size molding with glass sintering technology.

    PubMed

    Lee, Xuan-Hao; Tsai, Jung-Lin; Ma, Shih-Hsin; Sun, Ching-Cherng

    2012-03-12

    In this paper, a down-size sintering scheme for making high-performance diffusers with microstructures for beam shaping is presented and demonstrated. Using the down-size sintering method, a surface-structured film is designed and fabricated to verify the feasibility of the sintering technology, in which a dimension reduction of up to 1/8 has been achieved. In addition, a special impressing technology has been applied to fabricate diffuser films from various materials, with transmission efficiency as high as 85% and above. When introduced into possible lighting applications, the diffusers have shown high performance in glare reduction, beam shaping and energy saving.

  10. Diverse Power Iteration Embeddings and Its Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang H.; Yoo S.; Yu, D.

    2014-12-14

    Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated before it can be applied to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings that are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating the resulting performance improvements. The experimental results show that the proposed DPIE is more effective than popular spectral approximation methods and attains quality similar to classic spectral embeddings derived from eigendecompositions. Moreover, it is extremely fast on big-data applications: in terms of clustering quality, for example, DPIE achieves as much as 95% of classic spectral clustering on complex datasets while being 4000+ times faster in a limited-memory environment.
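
    The power-iteration core that such embeddings build on is compact. A minimal sketch on a dense similarity matrix (the diversity mechanism that distinguishes DPIE is not reproduced here):

    ```python
    import numpy as np

    def power_iteration_embedding(W, n_iter=15, seed=0):
        """W: symmetric nonnegative similarity matrix with positive row sums."""
        P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
        rng = np.random.default_rng(seed)
        v = rng.random(W.shape[0])
        for _ in range(n_iter):                # early stopping keeps the
            v = P @ v                          # cluster-discriminating structure
            v /= np.abs(v).sum()               # keep the iterate bounded
        return v                               # 1-D embedding vector
    ```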

  11. Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.

    PubMed

    Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M

    2015-10-01

    Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method; sections without metal were unaffected by the algorithms and showed identical image quality. Of the metal-containing sections, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Image quality was poor in 33% of sections with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction; moderate in 13%, 17%, and 22%, respectively; and good in 16%, 5%, and 30%. Excellent image quality was reached in 1% of sections with LIMAR and 31% with iterative metal artifact reduction. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.

  12. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.
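
    The outer Gauss-Newton loop has the generic shape below; F and J are hypothetical placeholders for the preconditioned residual map and its Jacobian, not the authors' code:

    ```python
    import numpy as np

    def gauss_newton(F, J, x0, n_iter=20, damping=1e-8):
        """Damped Gauss-Newton iteration for least-squares residual F."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            r = F(x)                                  # residual vector
            Jx = J(x)                                 # Jacobian of the residual
            A = Jx.T @ Jx + damping * np.eye(x.size)  # damped normal equations
            x = x - np.linalg.solve(A, Jx.T @ r)
        return x

    # Example: fit y = exp(a*t) to data (a is the unknown parameter).
    t = np.linspace(0.0, 1.0, 20)
    y = np.exp(1.5 * t)
    F = lambda x: np.exp(x[0] * t) - y
    J = lambda x: (t * np.exp(x[0] * t))[:, None]
    print(gauss_newton(F, J, [0.5]))                  # -> approximately [1.5]
    ```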

  13. WavePacket: A Matlab package for numerical quantum dynamics. II: Open quantum systems, optimal control, and model reduction

    NASA Astrophysics Data System (ADS)

    Schmidt, Burkhard; Hartmann, Carsten

    2018-07-01

    WavePacket is an open-source program package for numerical simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows, e.g., simulation of molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semiclassical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I [Comp. Phys. Comm. 213, 223-234 (2017)], which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable; those states are either truncated or averaged out. In the second, the H2 error for a given reduced dimensionality is minimized by H2-optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm. The present work describes the MATLAB version of WavePacket 5.3.0, which is hosted and further developed on the SourceForge platform, where extensive wiki documentation and numerous worked-out demonstration examples with animated graphics can also be found.

  14. Iterative design of one- and two-dimensional FIR digital filters. [Finite duration Impulse Response

    NASA Technical Reports Server (NTRS)

    Suk, M.; Choi, K.; Algazi, V. R.

    1976-01-01

    The paper describes a new iterative technique for designing FIR (finite-duration impulse response) digital filters using a frequency-weighted least-squares approximation. The technique is as easy to implement (via the FFT) and as effective in two dimensions as in one, and there are virtually no limitations on the class of filter frequency spectra that can be approximated. An adaptive adjustment of the frequency weight to achieve other types of design approximation, such as Chebyshev-type designs, is discussed.
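
    A sketch of the underlying idea, frequency-weighted least-squares design with adaptive re-weighting of large-error frequencies (Lawson-style), using an explicit least-squares solve in place of the paper's FFT implementation; the filter length, grid and passband edge are illustrative assumptions:

    ```python
    import numpy as np

    N = 31                                        # filter length (assumed)
    w = np.linspace(0.0, np.pi, 256)              # frequency grid
    # Desired linear-phase lowpass response (passband edge 0.4*pi, illustrative).
    D = np.exp(-1j * w * (N - 1) / 2) * (w <= 0.4 * np.pi)
    E = np.exp(-1j * np.outer(w, np.arange(N)))   # DTFT matrix
    weight = np.ones_like(w)

    for _ in range(30):
        sw = np.sqrt(weight)[:, None]
        h, *_ = np.linalg.lstsq(sw * E, sw[:, 0] * D, rcond=None)
        taps = h.real                             # imaginary residue is negligible here
        err = np.abs(E @ taps - D)
        weight *= err + 1e-12                     # emphasize large-error frequencies
        weight /= weight.sum()                    # Chebyshev-like behavior emerges
    ```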

  15. An iterative method for systems of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1989-01-01

    An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding-function argument. Experimental results in two dimensions are presented.

  16. Perturbation-iteration theory for analyzing microwave striplines

    NASA Technical Reports Server (NTRS)

    Kretch, B. E.

    1985-01-01

    A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.

  17. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
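
    The planar recursion is easy to express in code. A minimal Python sketch of the Koch-curve iteration (standing in for the article's PASCAL example):

    ```python
    import numpy as np

    def koch(p, q, depth):
        """Return the Koch-curve points from p to q at the given depth."""
        if depth == 0:
            return [p, q]
        p, q = np.asarray(p, float), np.asarray(q, float)
        a = p + (q - p) / 3.0
        b = p + 2.0 * (q - p) / 3.0
        d = b - a
        # Apex of the equilateral bump: rotate (b - a) by 60 degrees about a.
        c = a + np.array([d[0] * 0.5 - d[1] * np.sqrt(3) / 2,
                          d[0] * np.sqrt(3) / 2 + d[1] * 0.5])
        pts = []
        for s, e in [(p, a), (a, c), (c, b), (b, q)]:
            pts.extend(koch(s, e, depth - 1)[:-1])   # drop duplicated joints
        return pts + [q]

    curve = koch([0, 0], [1, 0], depth=3)   # first few iterations of the curve
    ```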

  18. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.

  19. Quality in Inclusive and Noninclusive Infant and Toddler Classrooms

    ERIC Educational Resources Information Center

    Hestenes, Linda L.; Cassidy, Deborah J.; Hegde, Archana V.; Lower, Joanna K.

    2007-01-01

    The quality of care in infant and toddler classrooms was compared across inclusive (n=64) and noninclusive classrooms (n=400). Quality was measured using the Infant/Toddler Environment Rating Scale-Revised (ITERS-R). An exploratory and confirmatory factor analysis revealed four distinct dimensions of quality within the ITERS-R. Inclusive…

  20. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we study a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like features + LPP to extract features and reduce dimension. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of taking the dictionary directly from the samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We evaluate the proposed method with respect to training samples, dimension, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method carry classifying information, improving computing speed and achieving satisfying recognition results. PMID:26880878

  2. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains in both computation time and performance in large feature spaces. PMID:22736969
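
    The random-projection step itself is a one-liner. A minimal sketch, with dimensions chosen for illustration (KLSPI itself is not reproduced):

    ```python
    import numpy as np

    def random_project(X, k, seed=0):
        """Project rows of X (n x d) onto a random k-dimensional subspace."""
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
        return X @ R

    X = np.random.default_rng(1).standard_normal((100, 2048))  # toy features
    Z = random_project(X, k=64)   # pairwise distances are roughly preserved
    ```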

  3. Reflective Random Indexing and indirect inference: a scalable method for discovery of implicit connections.

    PubMed

    Cohen, Trevor; Schvaneveldt, Roger; Widdows, Dominic

    2010-04-01

    The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus. 2009 Elsevier Inc. All rights reserved.
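
    A toy sketch of the reflective cycle on a term-by-document count matrix; the published RRI variants differ in detail, so treat the normalization and seeding here as assumptions:

    ```python
    import numpy as np

    def reflective_random_indexing(M, k=100, cycles=2, seed=0):
        """M: term-by-document count matrix. Returns term and document vectors."""
        rng = np.random.default_rng(seed)
        doc_vecs = rng.choice([-1.0, 0.0, 1.0], size=(M.shape[1], k),
                              p=[0.05, 0.9, 0.05])   # sparse ternary seed vectors
        for _ in range(cycles):
            term_vecs = M @ doc_vecs                 # terms from documents
            term_vecs /= np.linalg.norm(term_vecs, axis=1, keepdims=True) + 1e-12
            doc_vecs = M.T @ term_vecs               # documents re-derived from terms
            doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True) + 1e-12
        return term_vecs, doc_vecs

    M = np.random.default_rng(1).integers(0, 3, size=(500, 40)).astype(float)
    term_vecs, doc_vecs = reflective_random_indexing(M)
    ```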

  4. An iterative technique to stabilize a linear time invariant multivariable system with output feedback

    NASA Technical Reports Server (NTRS)

    Sankaran, V.

    1974-01-01

    An iterative procedure is described for determining the constant gain matrix that will stabilize a linear constant multivariable system using output feedback. This procedure avoids the transformation of variables required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.

  5. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, we use the R, G and B components to form a matrix. Then one-dimensional and two-dimensional logistic mappings are used to generate a chaotic matrix, and the two chaotic mappings are iterated alternately to permute the matrix. At every iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and it is suitable for encrypting color images.
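
    As a toy illustration of the chaotic ingredient, the sketch below XORs image bytes with a logistic-map keystream; the paper's full scheme, with alternating 1-D/2-D maps, permutation and diffusion, is richer than this:

    ```python
    import numpy as np

    def logistic_keystream(n, x0=0.3456, r=3.99):
        """Quantize a logistic-map orbit into n key bytes."""
        x, out = x0, np.empty(n)
        for i in range(n):
            x = r * x * (1.0 - x)            # logistic map iteration
            out[i] = x
        return (out * 256).astype(np.uint8)

    img = np.random.default_rng(0).integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
    key = logistic_keystream(img.size).reshape(img.shape)
    cipher = img ^ key                       # XOR encryption of R, G, B planes
    plain = cipher ^ key                     # XOR again recovers the image
    ```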

  6. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    PubMed

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique adaptive statistical iterative reconstruction V (ASIR-V). Fifty consecutive oncologic patients acted as their own controls, undergoing computed tomography scans with both ASIR and ASIR-V during follow-up. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P < 0.0001) for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P < 0.0001) for ASIR-V. ASIR-V performed better on subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), while the other parameters (image sharpness, diagnostic acceptability, and overall image quality) were similar (P > 0.05). ASIR-V is a new iterative reconstruction technique with the potential to provide image quality equal to or greater than ASIR, with a dose reduction of around 40%.

  7. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
    - It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, validating the architecture concept very early without the details.
    - A software prototype is available very quickly. It improves communication between the system and software teams, as it enables early and efficient checking of a common understanding of the system requirements.
    - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team.
    These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises many difficulties, such as:
    - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
    - How to distinguish stable/unstable and dimensioning/standard requirements?
    - How to plan the development of each increment?
    - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc.
    Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, from both a methodological and a technological point of view:
    - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way.
    - How the CMM approach can help by better formalizing Requirements Management and Planning processes.
    - How Automatic Code Generation with "certified" tools (SCADE) can further dramatically shorten the development cycle.
    The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.

  8. Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space

    NASA Astrophysics Data System (ADS)

    Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus

    2018-05-01

    We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.

  9. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

    We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469

  10. SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction

    NASA Astrophysics Data System (ADS)

    Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo

    2017-03-01

    State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.

  11. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.
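
    For readers unfamiliar with ILC, the simplest (P-type) update law learns the control signal across batches. A toy sketch on a hypothetical first-order plant (the paper's 2D-FM switched-system design is far more elaborate):

    ```python
    import numpy as np

    def run_batch(u):
        """Toy first-order plant, simulated over one batch."""
        y, x = np.zeros_like(u), 0.0
        for t in range(len(u)):
            x = 0.9 * x + 0.5 * u[t]
            y[t] = x
        return y

    T = 50
    r = np.ones(T)                    # reference trajectory
    u = np.zeros(T)
    for k in range(30):               # batch (iteration) index
        e = r - run_batch(u)          # tracking error of the last batch
        u = u + 0.8 * e               # P-type ILC update: learn from the error
    ```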

  12. Evaluating the effect of increased pitch, iterative reconstruction and dual source CT on dose reduction and image quality.

    PubMed

    Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier

    2018-06-14

    To compare the radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) against standard-pitch scans reconstructed with filtered back projection (FBP) on dual-source CT. 114 CT scans were performed (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany): 39 thoracic, 54 thoracoabdominal and 21 abdominal scans. Three protocols were analysed: pitch of 1 reconstructed with FBP; pitch of 3.2 reconstructed with SAFIRE; and pitch of 3.2 with Stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and the dose differences of the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of the volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of Stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans while maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with Stellar detectors. Advances in knowledge: high-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.

  13. Solution of the symmetric eigenproblem AX = λBX by delayed division

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Bains, N. J. C.

    1986-01-01

    Delayed division is an iterative method for solving the linear eigenvalue problem AX = λBX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue λ. The report describes the pivoting strategy used in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.
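
    As a generic companion to the inverse subspace iteration techniques mentioned above, inverse iteration for the pencil (A, B) can be sketched as follows (this is not the delayed-division algorithm itself):

    ```python
    import numpy as np

    def inverse_iteration(A, B, n_iter=50, seed=0):
        """Smallest-magnitude eigenpair of A x = lambda B x (A nonsingular)."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(A.shape[0])
        for _ in range(n_iter):
            x = np.linalg.solve(A, B @ x)     # one inverse-iteration step
            x /= np.linalg.norm(x)
        lam = (x @ A @ x) / (x @ B @ x)       # Rayleigh quotient
        return lam, x

    A = np.diag([1.0, 3.0, 5.0])
    B = np.eye(3)
    lam, x = inverse_iteration(A, B)          # lam -> 1.0, smallest eigenvalue
    ```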

  14. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods which often achieve fast convergence rates suffer high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.
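
    The symmetric Gauss-Seidel relaxation at the heart of such schemes is a forward sweep followed by a backward sweep. A minimal sketch on a generic linear system (the flow solver itself is not shown):

    ```python
    import numpy as np

    def sgs_sweep(A, b, x, sweeps=10):
        """Symmetric Gauss-Seidel relaxation for A x = b."""
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):                 # forward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            for i in reversed(range(n)):       # backward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = sgs_sweep(A, b, np.zeros(3))   # converges for this diagonally dominant A
    ```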

  15. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.

  16. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    NASA Astrophysics Data System (ADS)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs the stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using Stochastic Finite Element Method (SFEM) and model reduction technique. These uncertainties are assumed having Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom to reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. It is noted that the computational time is significantly reduced while the accuracy is kept. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
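
    A truncated KL expansion of a 1-D Gaussian random field is straightforward to sketch; the exponential covariance, correlation length and mode count below are illustrative assumptions, not the risers' actual model:

    ```python
    import numpy as np

    n, L, corr_len, sigma = 200, 100.0, 20.0, 1.0
    s = np.linspace(0.0, L, n)
    # Assumed exponential covariance kernel on the discretized domain.
    C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(C)             # spectral decomposition of covariance
    idx = np.argsort(vals)[::-1][:10]          # keep the 10 largest KL modes
    vals, vecs = vals[idx], vecs[:, idx]

    xi = np.random.default_rng(0).standard_normal(10)   # independent N(0,1) terms
    field = vecs @ (np.sqrt(vals) * xi)        # one realization of the random field
    ```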

  17. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
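
    Plain (global) SVD coil compression, the baseline that the per-location method above refines, can be sketched in a few lines:

    ```python
    import numpy as np

    def compress_coils(data, n_virtual):
        """data: (n_samples, n_coils) k-space samples; returns compressed data."""
        U, s, Vh = np.linalg.svd(data, full_matrices=False)
        W = Vh[:n_virtual].conj().T            # compression matrix from top modes
        return data @ W                        # (n_samples, n_virtual)

    rng = np.random.default_rng(0)
    kspace = rng.standard_normal((4096, 32)) + 1j * rng.standard_normal((4096, 32))
    compressed = compress_coils(kspace, n_virtual=6)   # 32 channels -> 6 virtual
    ```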

  18. Iterative Reconstruction Techniques in Abdominopelvic CT: Technical Concepts and Clinical Implementation.

    PubMed

    Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V

    2015-07-01

    This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.

  19. Representation and alignment of sung queries for music information retrieval

    NASA Astrophysics Data System (ADS)

    Adams, Norman H.; Wakefield, Gregory H.

    2005-09-01

    The pursuit of robust and rapid query-by-humming systems, which search melodic databases using sung queries, is a common theme in music information retrieval. The retrieval aspect of this database problem has received considerable attention, whereas the front-end processing of sung queries and the data structure used to represent melodies have been based on musical intuition and historical momentum. The present work explores three time-series representations for sung queries: a sequence of notes, a "smooth" pitch contour, and a sequence of pitch histograms. The performance of the three representations is compared using a collection of naturally sung queries. It is found that the most robust performance is achieved by the representation with the highest dimension, the smooth pitch contour, but that this representation presents a formidable computational burden. For all three representations, it is necessary to align the query and target in order to achieve robust performance. The computational cost of the alignment is quadratic, hence it is necessary to keep the dimension small for rapid retrieval. Accordingly, iterative deepening is employed to achieve both robust performance and rapid retrieval. Finally, the conventional iterative framework is expanded to adapt the alignment constraints based on previous iterations, further expediting retrieval without degrading performance.
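
    Dynamic time warping is the standard quadratic-cost alignment in query-by-humming; a minimal cost-only sketch (the paper's exact alignment constraints and iterative-deepening wrapper are not reproduced):

    ```python
    import numpy as np

    def dtw_cost(q, t):
        """Quadratic-time dynamic-time-warping cost between two pitch sequences."""
        nq, nt = len(q), len(t)
        D = np.full((nq + 1, nt + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, nq + 1):
            for j in range(1, nt + 1):
                d = abs(q[i - 1] - t[j - 1])          # local pitch distance
                D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[nq, nt]

    q = np.array([60.0, 62.0, 64.0, 64.0, 62.0])      # query pitches (MIDI numbers)
    t = np.array([60.0, 62.0, 62.0, 64.0, 62.0, 60.0])
    print(dtw_cost(q, t))
    ```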

  20. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
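
    The greedy loop itself is generic and small; err below is a hypothetical callable standing in for the RCCE dimension-reduction error of a candidate constrained-species set:

    ```python
    def greedy_select(candidates, err, k):
        """Pick k items, each time adding the one that most reduces err."""
        selected = []
        for _ in range(k):
            best = min((c for c in candidates if c not in selected),
                       key=lambda c: err(selected + [c]))
            selected.append(best)
        return selected
    ```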

  1. Estimation of line dimensions in 3D direct laser writing lithography

    NASA Astrophysics Data System (ADS)

    Guney, M. G.; Fedder, G. K.

    2016-10-01

    Two photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in a wide range of research areas ranging from photonic and mechanical metamaterials to micro-devices. Most common structures are either single lines or formed by a set of interconnected lines as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process should be chosen carefully. However, the optimization of these writing parameters is an iterative and time consuming process in the absence of a model for the estimation of line dimensions. To this end, we report a semi-empirical analytic model through simulations and fitting, and demonstrate that it can be used for estimating the line dimensions mostly within one standard deviation of the average values over a wide range of laser power and scan speed combinations. The model delimits the trend in onset of micro-explosions in the photoresist due to over-exposure and of low degree of conversion due to under-exposure. The model guides setting of high-fidelity and robust writing parameters of a photonic crystal structure without iteration and in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate the line dimensions for tuning the writing parameters.

  2. A new approach to the human muscle model.

    PubMed

    Baildon, R W; Chapman, A E

    1983-01-01

    Hill's (1938) two component muscle model is used as basis for digital computer simulation of human muscular contraction by means of an iterative process. The contractile (CC) and series elastic (SEC) components are lumped components of structures which produce and transmit torque to the external environment. The CC is described in angular terms along four dimensions as a series of non-planar torque-angle-angular velocity surfaces stacked on top of each other, each surface being appropriate to a given level of muscular activation. The SEC is described similarly along dimensions of torque, angular stretch, overall muscle angular displacement and activation. The iterative process introduces negligible error and allows the mechanical outcome of a variety of normal muscular contractions to be evaluated parsimoniously. The model allows analysis of many aspects of muscle behaviour as well as optimization studies. Definition of relevant relations should also allow reproduction and prediction of the outcome of contractions in individuals.

  3. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one has completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way of using the "warm-start" has not been reported before. Furthermore, with the requirement of iterating to compute the commands removed, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Using simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally show the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.

  4. An efficient iteration strategy for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Walters, R. W.; Dwoyer, D. L.

    1985-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. The basic algorithm has the property that convergence to the steady state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind-biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves, which confirm the efficiency of the iteration strategy.

  5. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high-dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR) to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduce additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.
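
    The group-wise alternation at the heart of such schemes can be sketched with a plain dictionary-learning loop: sparse codes are updated with the dictionary held fixed, then the atoms are refit to the codes, and the two steps are iterated. The cross-subject consistency criteria introduced in the paper are omitted; this is a generic sketch, not the ESR algorithm.

      import numpy as np

      def learn_dictionary(X, n_atoms=4, lam=0.1, n_iter=50, rng=None):
          # X: (n_features, n_samples) matrix whose columns are fMRI signals.
          rng = rng or np.random.default_rng(0)
          D = rng.standard_normal((X.shape[0], n_atoms))
          D /= np.linalg.norm(D, axis=0)
          A = np.zeros((n_atoms, X.shape[1]))
          for _ in range(n_iter):
              # Sparse coding step: a few ISTA updates with the dictionary fixed.
              L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of D^T D
              for _ in range(10):
                  G = D.T @ (D @ A - X) / L
                  A = np.sign(A - G) * np.maximum(np.abs(A - G) - lam / L, 0.0)
              # Dictionary update step: least-squares refit, then renormalise atoms.
              D = X @ np.linalg.pinv(A)
              D /= np.linalg.norm(D, axis=0) + 1e-12
          return D, A

      X = np.random.default_rng(1).standard_normal((64, 200))
      D, A = learn_dictionary(X)
      print(D.shape, float(np.mean(np.abs(A) > 0)))   # atoms and code sparsity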

  6. Dose reduction with adaptive statistical iterative reconstruction for paediatric CT: phantom study and clinical experience on chest and abdomen CT.

    PubMed

    Gay, F; Pavia, Y; Pierrat, N; Lasalle, S; Neuenschwander, S; Brisse, H J

    2014-01-01

    To assess the benefit and limits of iterative reconstruction for paediatric chest and abdominal computed tomography (CT), the study compared adaptive statistical iterative reconstruction (ASIR) with filtered back projection (FBP) on 64-channel MDCT. A phantom study was first performed using variable tube potential, tube current and ASIR settings. The assessed image-quality indices were the signal-to-noise ratio (SNR), the noise power spectrum, low-contrast detectability (LCD) and spatial resolution. A retrospective clinical study of 26 children (M:F = 14/12, mean age: 4 years, range: 1-9 years) was then performed, allowing comparison of 18 chest and 14 abdominal CT pairs, one acquired at a routine CT dose with FBP reconstruction, and the other at 30% lower dose with 40% ASIR reconstruction. Two radiologists independently compared the images for overall image quality, noise, sharpness and artefacts, and measured image noise. The phantom study demonstrated a significant increase in SNR without impairment of the LCD or spatial resolution, except for tube current values below 30-50 mA. On clinical images, no significant difference was observed between FBP and reduced-dose ASIR images. Iterative reconstruction allows at least a 30% dose reduction in paediatric chest and abdominal CT without impairment of image quality. • Iterative reconstruction helps lower radiation exposure levels in children undergoing CT. • Adaptive statistical iterative reconstruction (ASIR) significantly increases SNR without impairing spatial resolution. • For abdomen and chest CT, ASIR allows at least a 30% dose reduction.

  7. Computed Tomography Imaging of a Hip Prosthesis Using Iterative Model-Based Reconstruction and Orthopaedic Metal Artefact Reduction: A Quantitative Analysis.

    PubMed

    Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario

    To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom, scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001), while improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in CT imaging of a total hip arthroplasty phantom.

  8. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
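
    For a Gaussian kernel, the fixed-point pre-image estimation mentioned above typically takes the Mika-style form, iterating the pre-image as a kernel-weighted average of the training points. A minimal sketch, assuming the feature-space target is represented by coefficients gamma over the training set (how those coefficients arise from the kernel-PCA prediction is omitted):

      import numpy as np

      def gaussian_kernel(z, X, sigma):
          d2 = np.sum((X - z) ** 2, axis=1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def preimage_fixed_point(X, gamma, sigma, n_iter=100, tol=1e-8):
          # Mika-style fixed-point iteration for the Gaussian-kernel pre-image:
          #   z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i)
          z = X.T @ gamma / np.sum(gamma)      # start from the linear blend
          for _ in range(n_iter):
              w = gamma * gaussian_kernel(z, X, sigma)
              z_new = X.T @ w / (np.sum(w) + 1e-12)
              if np.linalg.norm(z_new - z) < tol:
                  break
              z = z_new
          return z

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 3))        # e.g. flattened surface states
      gamma = rng.random(200)                  # assumed expansion coefficients
      print(preimage_fixed_point(X, gamma, sigma=1.0))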

  9. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

    Objectives To evaluate the clinical value of noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35HU with small interquartile range for Group 2 (35.00–35.03HU) and Group 3 (34.99–35.02HU), while from 28.73 to 37.87HU for Group 1. All images in the three groups were acceptable for diagnosis. A relative 20% and 51% reduction in effective dose for Group 2 (2.9 mSv) and Group 3 (1.8 mSv) were achieved compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with 26% reduction in effective dose. Conclusion Noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444

  10. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    Error-correcting codes improve reliability, e.g., turbo codes [2] and low-density parity-check (LDPC) codes [3]. The challenge in applying both MIMO and ECC in wireless systems is … The report illustrates the performance of coded lattice-reduction-aided (LR-aided) detectors.

  11. Compressed sensing and the reconstruction of ultrafast 2D NMR data: Principles and biomolecular applications.

    PubMed

    Shrot, Yoav; Frydman, Lucio

    2011-04-01

    A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when this number is dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way down to one, by spatially encoding the indirect-domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensing reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiment by up to 80%. Copyright © 2011 Elsevier Inc. All rights reserved.
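
    The iterative reconstruction step can be illustrated with a generic compressed-sensing solver: iterative soft thresholding recovering a spectrum with a few sparse peaks from undersampled linear measurements. The spatial-encoding specifics of ultrafast NMR are not modelled; the operator below is an ordinary undersampled inverse DFT.

      import numpy as np

      def ista(Phi, y, lam=0.05, n_iter=300):
          # Iterative soft thresholding for min ||Phi x - y||^2 + lam ||x||_1,
          # a generic stand-in for compressed-sensing spectral reconstruction.
          L = np.linalg.norm(Phi, 2) ** 2
          x = np.zeros(Phi.shape[1], dtype=complex)
          for _ in range(n_iter):
              u = x - Phi.conj().T @ (Phi @ x - y) / L
              x = np.maximum(np.abs(u) - lam / L, 0.0) * np.exp(1j * np.angle(u))
          return x

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 4
      spectrum = np.zeros(n, dtype=complex)
      spectrum[rng.choice(n, k, replace=False)] = 1.0      # k sparse peaks
      F = np.fft.ifft(np.eye(n), axis=0) * np.sqrt(n)      # unitary inverse DFT
      rows = rng.choice(n, m, replace=False)               # undersampled time points
      Phi = F[rows]
      y = Phi @ spectrum
      x_hat = ista(Phi, y)
      print(np.sort(np.abs(x_hat))[-k:])                   # recovered peak amplitudes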

  12. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    Summary We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  13. An efficient algorithm using matrix methods to solve wind tunnel force-balance equations

    NASA Technical Reports Server (NTRS)

    Smith, D. L.

    1972-01-01

    An iterative procedure applying matrix methods has been developed to provide an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and accounting for initial load effects in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data-reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated with the new program.
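
    The essence of such a balance-data reduction is a fixed-point iteration: loads are first estimated from the primary sensitivities alone, the interaction terms are then evaluated at the current estimate and moved to the right-hand side, and the loads are re-solved until the update falls below tolerance. A schematic sketch with second-order interaction terms (all coefficient values are placeholders):

      import numpy as np

      def q(F):
          # Pairwise interaction products F_i * F_j, i <= j.
          n = len(F)
          return np.array([F[i] * F[j] for i in range(n) for j in range(i, n)])

      def reduce_balance(readings, C1, C2, tol=1e-10, max_iter=100):
          # Solve R = C1 F + C2 q(F) by iterating F <- C1^{-1} (R - C2 q(F)).
          C1_inv = np.linalg.inv(C1)
          F = C1_inv @ readings                 # first-order estimate
          for _ in range(max_iter):
              F_new = C1_inv @ (readings - C2 @ q(F))
              if np.max(np.abs(F_new - F)) < tol:
                  break
              F = F_new
          return F

      rng = np.random.default_rng(0)
      n = 3
      C1 = np.eye(n) + 0.05 * rng.standard_normal((n, n))      # primary sensitivities
      C2 = 0.001 * rng.standard_normal((n, n * (n + 1) // 2))  # small interactions
      F_true = np.array([100.0, -40.0, 15.0])
      R = C1 @ F_true + C2 @ q(F_true)          # synthetic balance readings
      print(reduce_balance(R, C1, C2))          # converges back to F_true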

  14. Dynamic condensation of non-classically damped structures using the method of Maclaurin expansion of the frequency response function in Laplace domain

    NASA Astrophysics Data System (ADS)

    Esmaeilzad, Armin; Khanlari, Karen

    2018-07-01

    As the number of degrees of freedom (DOFs) in structural dynamic problems becomes larger, the complexity of the analysis and the CPU usage increase drastically. The condensation (or reduction) method is an efficient technique to reduce the size of the full model, or the dimension of the structural matrices, by eliminating unimportant DOFs. After the first presentation of the condensation method by Guyan in 1965 for undamped structures, which ignores the dynamic effects of the mass term, various forms of dynamic condensation methods were presented to overcome this issue. Moreover, researchers have tried to extend the dynamic condensation method to non-classically damped structures, whose dynamic reduction is far more complicated than that of undamped systems. The non-iterative method proposed in this paper, 'Maclaurin Expansion of the frequency response function in Laplace Domain' (MELD), is applied to the dynamic reduction of non-classically damped structures. The approach is implemented in four numerical examples of 2D bending-shear-axial frames with various numbers of stories and spans, and also a floating raft isolation system. The natural frequencies and dynamic responses of the models are compared before and after the dynamic reduction, and the results are shown to converge with acceptable accuracy in both cases. In addition, the proposed method is shown to be more accurate than some other existing condensation methods.
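
    For background, the classical Guyan (static) condensation that later dynamic methods build on eliminates the slave DOFs in favour of the master DOFs through a single transformation matrix; MELD itself, with its Maclaurin expansion in the Laplace domain, is not reproduced in this sketch.

      import numpy as np

      def guyan_condense(K, M, masters):
          # Classical Guyan reduction: T = [I; -Kss^{-1} Ksm] maps master DOFs
          # to the full set; reduced matrices are T^T K T and T^T M T.
          n = K.shape[0]
          slaves = [i for i in range(n) if i not in masters]
          idx = masters + slaves
          Kp = K[np.ix_(idx, idx)]; Mp = M[np.ix_(idx, idx)]
          nm = len(masters)
          Kss = Kp[nm:, nm:]; Ksm = Kp[nm:, :nm]
          T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])
          return T.T @ Kp @ T, T.T @ Mp @ T

      # Toy 4-DOF spring-mass chain, keeping DOFs 0 and 2 as masters.
      k = 1000.0
      K = k * np.array([[ 2, -1,  0,  0],
                        [-1,  2, -1,  0],
                        [ 0, -1,  2, -1],
                        [ 0,  0, -1,  1]], dtype=float)
      M = np.eye(4)
      Kr, Mr = guyan_condense(K, M, masters=[0, 2])
      # Compare the lowest natural frequency before and after reduction.
      full = np.sqrt(np.min(np.linalg.eigvals(np.linalg.solve(M, K)).real))
      red = np.sqrt(np.min(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))
      print(full, red)    # the reduced model approximates the lowest mode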

  15. Measuring radiation dose in computed tomography using elliptic phantom and free-in-air, and evaluating iterative metal artifact reduction algorithm

    NASA Astrophysics Data System (ADS)

    Morgan, Ashraf

    The need for an accurate and reliable way for measuring patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research was focusing on the possibility of measuring CT dose in air to estimate Computed Tomography Dose Index (CTDI) for routine quality control purposes. New elliptic CTDI phantom that better represent human geometry was manufactured for investigating the effect of the subject shape on measured CTDI. Monte Carlo simulation was utilized in order to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens health care newly developed iMAR (iterative metal artifact reduction) algorithm, arthroplasty phantom was designed and manufactured that purpose. The design of new phantoms was part of the research as they mimic the human geometry more than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm that is used during patient scan will not be utilized when scanning the CTDI phantom, so a better-designed phantom will allow the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or better understanding of dose delivery. Doses from a standard CTDI phantom and the newly-designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique in MDCT dose reduction and artifacts correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks as is the case with Iterative Metal Artifact Reduction or iMAR which was developed by Siemens and is to be in use with the companys future computed tomography platform. The goal of iMAR is to reduce metal artifact when imaging patients with metal implants and recover CT number of tissues adjacent to the implant. This research evaluated iMAR capability of recovering CT numbers and reducing noise. Also, the use of iMAR should allow using lower tube voltage instead of 140 KVp which is used frequently to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.

  16. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
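
    For reference, the self-consistent WHAM iteration that the regression approach avoids looks like the following sketch (one order parameter, with notation assumed rather than taken from the paper):

      import numpy as np

      def wham(hist, bias, n_samples, kT=0.593, tol=1e-7):
          # hist[w, b]: counts of window w in bin b; bias[w, b]: bias energy of
          # window w evaluated at bin b. Iterate the WHAM equations for the
          # unbiased probability p(b) and the window free energies f(w).
          c = np.exp(-bias / kT)                      # Boltzmann factors of biases
          total = hist.sum(axis=0)                    # total counts per bin
          f = np.zeros(hist.shape[0])
          while True:
              denom = (n_samples[:, None] * np.exp(f[:, None] / kT) * c).sum(axis=0)
              p = total / denom                       # unbiased density estimate
              f_new = -kT * np.log((c * p[None, :]).sum(axis=1))
              f_new -= f_new[0]                       # fix the arbitrary offset
              if np.max(np.abs(f_new - f)) < tol:
                  return p / p.sum(), f_new
              f = f_new

      # Two overlapping umbrella windows over four bins (synthetic counts).
      hist = np.array([[40, 60, 10, 0], [0, 15, 55, 30]], dtype=float)
      bias = np.array([[0.0, 0.5, 2.0, 4.5], [4.5, 2.0, 0.5, 0.0]])
      p, f = wham(hist, bias, n_samples=hist.sum(axis=1))
      print(p, f)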

  17. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  18. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    PubMed Central

    Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
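
    The appeal of split Bregman for GPUs is that its inner steps are simple and data-parallel. A minimal CPU sketch of the method for 1-D total-variation denoising is given below; the MRI sampling operator is replaced by a plain denoising objective, and the quadratic solve is done directly rather than on a GPU.

      import numpy as np

      def shrink(x, t):
          # Soft thresholding -- the elementwise, trivially parallel step that
          # maps well onto GPU hardware.
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def tv_denoise_split_bregman(g, mu=10.0, lam=5.0, n_iter=100):
          # 1-D total-variation denoising via split Bregman:
          #   min_u (mu/2)||u - g||^2 + |Du|_1, with the split d = Du.
          n = len(g)
          D = np.diff(np.eye(n), axis=0)           # forward-difference operator
          A = mu * np.eye(n) + lam * D.T @ D       # system matrix of the u-update
          d = np.zeros(n - 1)
          b = np.zeros(n - 1)                      # Bregman variable
          u = g.copy()
          for _ in range(n_iter):
              u = np.linalg.solve(A, mu * g + lam * D.T @ (d - b))
              d = shrink(D @ u + b, 1.0 / lam)
              b = b + D @ u - d
          return u

      rng = np.random.default_rng(0)
      clean = np.repeat([0.0, 1.0, -0.5], 60)      # piecewise-constant signal
      noisy = clean + 0.1 * rng.standard_normal(clean.size)
      print(np.abs(tv_denoise_split_bregman(noisy) - clean).mean())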

  19. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.

    PubMed

    Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.

  20. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason L. Wright; Milos Manic

    2010-05-01

    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
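
    As a generic illustration of this kind of pipeline, the sketch below reduces feature dimension with PCA and tracks a simple classifier's cross-validated accuracy as the number of retained dimensions grows (scikit-learn, synthetic data; the sorted-covariance and correlation-based feature selection variants are not shown).

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline

      # Synthetic stand-in for object-code feature vectors.
      X, y = make_classification(n_samples=500, n_features=40,
                                 n_informative=8, random_state=0)

      for k in (2, 4, 8, 16, 32):
          clf = make_pipeline(PCA(n_components=k), GaussianNB())
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{k:2d} dimensions: accuracy {acc:.3f}")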

  1. A Review on Dimension Reduction

    PubMed Central

    Ma, Yanyuan; Zhu, Liping

    2013-01-01

    Summary Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature on dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on their underlying ideas instead of technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782

  2. The PROactive instruments to measure physical activity in patients with chronic obstructive pulmonary disease

    PubMed Central

    Gimeno-Santos, Elena; Raste, Yogini; Demeyer, Heleen; Louvaris, Zafeiris; de Jong, Corina; Rabinovich, Roberto A.; Hopkinson, Nicholas S.; Polkey, Michael I.; Vogiatzis, Ioannis; Tabberer, Maggie; Dobbels, Fabienne; Ivanoff, Nathalie; de Boer, Willem I.; van der Molen, Thys; Kulich, Karoly; Serra, Ignasi; Basagaña, Xavier; Troosters, Thierry; Puhan, Milo A.; Karlsson, Niklas

    2015-01-01

    No current patient-centred instrument captures all dimensions of physical activity in chronic obstructive pulmonary disease (COPD). Our objective was item reduction and initial validation of two instruments to measure physical activity in COPD. Physical activity was assessed in a 6-week, randomised, two-way cross-over, multicentre study using PROactive draft questionnaires (daily and clinical visit versions) and two activity monitors. Item reduction followed an iterative process including classical and Rasch model analyses, and input from patients and clinical experts. 236 COPD patients from five European centres were included. Results indicated the concept of physical activity in COPD had two domains, labelled “amount” and “difficulty”. After item reduction, the daily PROactive instrument comprised nine items and the clinical visit contained 14. Both demonstrated good model fit (person separation index >0.7). Confirmatory factor analysis supported the bidimensional structure. Both instruments had good internal consistency (Cronbach's α>0.8), test–retest reliability (intraclass correlation coefficient ≥0.9) and exhibited moderate-to-high correlations (r>0.6) with related constructs and very low correlations (r<0.3) with unrelated constructs, providing evidence for construct validity. Daily and clinical visit “PROactive physical activity in COPD” instruments are hybrid tools combining a short patient-reported outcome questionnaire and two activity monitor variables which provide simple, valid and reliable measures of physical activity in COPD patients. PMID:26022965

  3. The PROactive instruments to measure physical activity in patients with chronic obstructive pulmonary disease.

    PubMed

    Gimeno-Santos, Elena; Raste, Yogini; Demeyer, Heleen; Louvaris, Zafeiris; de Jong, Corina; Rabinovich, Roberto A; Hopkinson, Nicholas S; Polkey, Michael I; Vogiatzis, Ioannis; Tabberer, Maggie; Dobbels, Fabienne; Ivanoff, Nathalie; de Boer, Willem I; van der Molen, Thys; Kulich, Karoly; Serra, Ignasi; Basagaña, Xavier; Troosters, Thierry; Puhan, Milo A; Karlsson, Niklas; Garcia-Aymerich, Judith

    2015-10-01

    No current patient-centred instrument captures all dimensions of physical activity in chronic obstructive pulmonary disease (COPD). Our objective was item reduction and initial validation of two instruments to measure physical activity in COPD. Physical activity was assessed in a 6-week, randomised, two-way cross-over, multicentre study using PROactive draft questionnaires (daily and clinical visit versions) and two activity monitors. Item reduction followed an iterative process including classical and Rasch model analyses, and input from patients and clinical experts. 236 COPD patients from five European centres were included. Results indicated the concept of physical activity in COPD had two domains, labelled "amount" and "difficulty". After item reduction, the daily PROactive instrument comprised nine items and the clinical visit contained 14. Both demonstrated good model fit (person separation index >0.7). Confirmatory factor analysis supported the bidimensional structure. Both instruments had good internal consistency (Cronbach's α>0.8), test-retest reliability (intraclass correlation coefficient ≥0.9) and exhibited moderate-to-high correlations (r>0.6) with related constructs and very low correlations (r<0.3) with unrelated constructs, providing evidence for construct validity. Daily and clinical visit "PROactive physical activity in COPD" instruments are hybrid tools combining a short patient-reported outcome questionnaire and two activity monitor variables which provide simple, valid and reliable measures of physical activity in COPD patients. Copyright ©ERS 2015.

  4. Correlation dimension and phase space contraction via extreme value theory

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Vaienti, Sandro

    2018-04-01

    We show how to obtain theoretical and numerical estimates of the correlation dimension and of phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which, in dimensions 1 and 2, is closely related to the positive Lyapunov exponent, and in higher dimensions is related to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust, even with relatively short time series.
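
    Numerically, the recipe is indeed a simple univariate fit: sample an observable along the trajectory, take block maxima, and fit a generalized extreme value law. A minimal sketch for the observable -log(dist(x, x0)) on the logistic map, whose scale parameter's inverse estimates the correlation dimension (set-up assumed, using scipy's GEV fit):

      import numpy as np
      from scipy.stats import genextreme

      # Trajectory of the chaotic logistic map x -> 4x(1-x).
      n, x = 200_000, 0.3
      traj = np.empty(n)
      for i in range(n):
          x = 4.0 * x * (1.0 - x)
          traj[i] = x

      x0 = traj[-1]                                   # reference point on the attractor
      obs = -np.log(np.abs(traj[:-1] - x0) + 1e-300)  # observable peaks near x0

      block = 2000                                    # block maxima for the EVT fit
      m = len(obs) // block * block
      maxima = obs[:m].reshape(-1, block).max(axis=1)
      shape, loc, scale = genextreme.fit(maxima)
      print("correlation dimension estimate:", 1.0 / scale)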

  5. ELM mitigation studies in JET and implications for ITER

    NASA Astrophysics Data System (ADS)

    de La Luna, Elena

    2009-11-01

    Type I edge localized modes (ELMs) remain a serious concern for ITER because of the high transient heat and particle flux that can lead to rapid erosion of the divertor plates. This has stimulated worldwide research exploring different methods to avoid, or at least mitigate, the ELM energy loss while maintaining adequate confinement. ITER will require reliable ELM control over a wide range of operating conditions, including changes in the edge safety factor; therefore a suite of different techniques is highly desirable. In JET several techniques have been demonstrated for controlling the frequency and size of type I ELMs, including resonant perturbations of the edge magnetic field (RMP), ELM magnetic triggering by fast vertical movement of the plasma column ("vertical kicks") and ELM pacing using pellet injection. In this paper we present results from recent dedicated experiments in JET focusing on integrating the different ELM mitigation methods into similar plasma scenarios. Plasma parameter scans provide a comparison of the performance of the different techniques in terms of both the reduction in ELM size and the impact of each control method on plasma confinement. The compatibility of the different ELM mitigation schemes has also been investigated. The plasma response to RMP and vertical kicks during the ELM mitigation phase shares common features: the reduction in ELM size (up to a factor of 3) is accompanied by a reduction in pedestal pressure (mainly due to a loss of density) with only a minor (< 10%) reduction of the stored energy. Interestingly, it has been found that the combined application of RMP and kicks leads to a reduction of the threshold perturbation level (vertical displacement in the case of the kicks) necessary for ELM mitigation to occur. The implications of these results for ITER are discussed.

  6. Iterative deblending of simultaneous-source data using a coherency-pass shaping operator

    NASA Astrophysics Data System (ADS)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang

    2017-10-01

    Simultaneous-source acquisition offers great economic savings, but it brings an unprecedented challenge of removing the crosstalk interference in the recorded seismic data. In this paper, we propose a novel iterative method to separate simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, in the shaping regularization framework. In a simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix that is formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. However, the shortcoming is that it may leave residual noise when there is strong blending interference. We propose to cascade the rank-reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to aggressively suppress the blending interference, together with a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraint by increasing the rank and shrinking the threshold, so as to recover weak events and to guarantee convergence. In this way, the combined rank-reduction and thresholding strategy acts as a coherency-pass filter, which passes only the coherent, high-amplitude component after rank reduction, instead of passing both signal and noise as in traditional rank-reduction-based approaches. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
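
    The combined operator can be sketched directly: truncate the singular values of the data matrix (standing in here for the frequency-domain Hankel matrix) and then threshold the amplitudes, with the rank growing and the threshold shrinking across iterations. This is only a schematic of the constraint itself; the blending operators, Hankel averaging and data-consistency steps of the full deblending loop are omitted.

      import numpy as np

      def coherency_pass(D, rank, thresh):
          # Rank reduction followed by amplitude thresholding: pass only the
          # coherent, high-amplitude component of the data matrix D.
          U, s, Vh = np.linalg.svd(D, full_matrices=False)
          low_rank = (U[:, :rank] * s[:rank]) @ Vh[:rank]
          return np.where(np.abs(low_rank) > thresh, low_rank, 0.0)

      def relaxation_schedule(n_iter, r0=2, r1=10, t0=1.0, t1=0.05):
          # Small rank / large threshold early; weaker constraint later.
          for i in range(n_iter):
              a = i / max(n_iter - 1, 1)
              yield int(round(r0 + a * (r1 - r0))), t0 + a * (t1 - t0)

      rng = np.random.default_rng(0)
      signal = np.outer(np.sin(np.linspace(0, 6, 64)), np.ones(32))  # coherent events
      blended = signal + rng.standard_normal((64, 32))               # crosstalk noise
      est = blended
      for rank, thresh in relaxation_schedule(8):
          est = coherency_pass(est, rank, thresh)
      print(np.linalg.norm(est - signal) / np.linalg.norm(signal))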

  7. Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Rich

    1995-01-01

    Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)

  8. Iterative methods used in overlap astrometric reduction techniques do not always converge

    NASA Astrophysics Data System (ADS)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge, and we exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of the Wang method for the solution of astrometric problems using overlap techniques.
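
    The phenomenon is easy to reproduce: Gauss-Seidel converges for, e.g., diagonally dominant or symmetric positive definite systems, but a matrix violating such conditions, as can arise in overlap normal equations, can make it diverge. A toy demonstration:

      import numpy as np

      def gauss_seidel(A, b, n_iter=50):
          # Split A = L + U (L = lower triangle incl. diagonal) and iterate
          # x <- L^{-1} (b - U x).
          x = np.zeros_like(b)
          L = np.tril(A)
          U = A - L
          for _ in range(n_iter):
              x = np.linalg.solve(L, b - U @ x)
          return x

      b = np.ones(3)
      A_good = np.array([[4., 1., 1.], [1., 4., 1.], [1., 1., 4.]])  # dominant
      A_bad  = np.array([[1., 2., 2.], [2., 1., 2.], [2., 2., 1.]])  # not dominant

      for A in (A_good, A_bad):
          x = gauss_seidel(A, b)
          print(np.linalg.norm(A @ x - b))   # tiny residual vs. blow-up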

  9. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery, and has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete-data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components at low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as "iFBP-TV" and "TV-FADM," respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulated and real CT-scanned datasets. This approach can reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square error, normalized mean absolute distance, and universal quality index metrics of the images. Both the iFBP-TV and TV-FADM methods outperform their counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.

  10. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
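
    The back-and-forth structure of such Fourier-based iterations can be sketched as a POCS-style loop: enforce the measured samples in Fourier space, return to real space, apply physical constraints (here, non-negativity), and repeat. The standard FFT below stands in for the algebraically exact PPFFT, and the regularization is omitted.

      import numpy as np

      def fourier_iterative_recon(measured, mask, n_iter=200):
          # measured: Fourier samples on the acquired grid points (mask == True).
          # Alternate data enforcement in Fourier space with real-space constraints.
          img = np.zeros(mask.shape)
          for _ in range(n_iter):
              F = np.fft.fft2(img)
              F[mask] = measured[mask]         # enforce the measured data
              img = np.fft.ifft2(F).real
              img[img < 0] = 0.0               # physical constraint: non-negativity
          return img

      rng = np.random.default_rng(0)
      phantom = np.zeros((64, 64)); phantom[20:44, 24:40] = 1.0
      mask = rng.random((64, 64)) < 0.4        # 40% of Fourier samples acquired
      measured = np.fft.fft2(phantom) * mask
      recon = fourier_iterative_recon(measured, mask)
      print(np.abs(recon - phantom).mean())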

  11. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  12. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  13. Neuroanatomical profiles of alexithymia dimensions and subtypes.

    PubMed

    Goerlich-Dobre, Katharina Sophia; Votinov, Mikhail; Habel, Ute; Pripfl, Juergen; Lamm, Claus

    2015-10-01

    Alexithymia, a major risk factor for a range of psychiatric and neurological disorders, has been recognized to comprise two dimensions, a cognitive dimension (difficulties identifying, analyzing, and verbalizing feelings) and an affective one (difficulties emotionalizing and fantasizing). Based on these dimensions, the existence of four distinct alexithymia subtypes has been proposed, but never empirically tested. In this study, 125 participants were assigned to four groups corresponding to the proposed alexithymia subtypes: Type I (impairment on both dimensions), Type II (impairment on the cognitive, but not the affective dimension), Type III (impairment on the affective, but not the cognitive dimension), and Lexithymics (no impairment on either dimension). By means of voxel-based morphometry, associations of the alexithymia dimensions and subtypes with gray and white matter volumes were analyzed. Type I and Type II alexithymia were characterized by gray matter volume reductions in the left amygdala and the thalamus. The cognitive dimension was further linked to volume reductions in the right amygdala, left posterior insula, precuneus, caudate, hippocampus, and parahippocampus. Type III alexithymia was marked by volume reduction in the MCC only, and the affective dimension was further characterized by larger sgACC volume. Moreover, individuals with the intermediate alexithymia Types II and III showed gray matter volume reductions in distinct regions, and had larger corpus callosum volumes compared to Lexithymics. These results substantiate the notion of a differential impact of the cognitive and affective alexithymia dimensions on brain morphology and provide evidence for separable neuroanatomical representations of the different alexithymia subtypes. © 2015 Wiley Periodicals, Inc.

  14. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
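
    The Kaczmarz-like iterations mentioned above sweep through individual measurement rows, projecting the current estimate onto each row's hyperplane in turn. A minimal sketch for a generic linear system (the distorted-Born forward solves that generate such rows are beyond this example):

      import numpy as np

      def kaczmarz(A, b, n_sweeps=50):
          # Project the iterate onto one measurement hyperplane at a time:
          #   x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  a = A[i]
                  x += (b[i] - a @ x) / (a @ a) * a
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 40))
      x_true = rng.standard_normal(40)
      b = A @ x_true
      print(np.linalg.norm(kaczmarz(A, b) - x_true))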

  15. Gas Flows in Rocket Motors. Volume 3. Appendix D. Computer Code Listings

    DTIC Science & Technology

    1989-08-01

    Released through the National Technical Information Service, where it is available to the general public, including foreign nationals. Prepared for the Astronautics Laboratory (AFSC). Appendix D contains the computer code listings, including a Fortran solver for axisymmetric transonic nozzle flow in a general coordinate system, using a time-iterative scheme with the thin-layer approximated Navier-Stokes equations.

  16. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in a phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose-reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: -3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  17. Response to Intervention: "Lore v. Law"

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    2018-01-01

    The legal dimension of response to intervention (RTI) has been the subject of considerable professional confusion. This brief article addresses the issue in three parts. The first part provides an update of a previous iteration that compared 12 common conceptions, referred to here as the "lore," with an objective synthesis of the…

  18. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improved code locality for this kind of algorithm are among the most relevant topics in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques, based on the transitive closure of dependence graphs, to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration-space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code, whose parameters are variables defining the tile size. For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors available in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor available in the same fixed tiled-code position, and replace the expressions in this code, including integer factors, with expressions including parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
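
    For reference, the Nussinov recurrence whose iteration space is tiled here is the classic O(n^3) dynamic program sketched below; the polyhedral tiling itself is a compiler transformation and is not shown.

      def nussinov(seq):
          # N[i][j] = maximum number of non-crossing base pairs in seq[i..j].
          pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          N = [[0] * n for _ in range(n)]
          for span in range(1, n):                 # the affine loop nest being tiled
              for i in range(n - span):
                  j = i + span
                  best = max(N[i + 1][j], N[i][j - 1])
                  if (seq[i], seq[j]) in pairs:
                      best = max(best, N[i + 1][j - 1] + 1)
                  for k in range(i + 1, j):        # bifurcation term
                      best = max(best, N[i][k] + N[k + 1][j])
                  N[i][j] = best
          return N[0][n - 1]

      print(nussinov("GGGAAAUCC"))   # 3 pairs for this toy strand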

  19. SU-F-P-45: Clinical Experience with Radiation Dose Reduction of CT Examinations Using Iterative Reconstruction Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weir, V; Zhang, J

    2016-06-15

    Purpose: Iterative reconstruction (IR) algorithms have been adopted by medical centers in the past several years. IR has the potential to substantially reduce patient dose while maintaining or improving image quality. This study characterizes dose reductions in clinical settings for CT examinations using IR. Methods: We retrospectively analyzed dose information from patients who underwent abdomen/pelvis CT examinations with and without contrast media in multiple locations of our healthcare system. Data from a total of 743 patients scanned with ASIR on 64-slice GE LightSpeed VCT scanners at three sites, and from 30 patients scanned with SAFIRE on a Siemens 128-slice Definition Flash at one site, were retrieved. For comparison, patient data (n=291) from a GE scanner and patient data (n=61) from two Siemens scanners where filtered back-projection (FBP) was used were collected retrospectively. 30% and 10% ASIR, and SAFIRE level 2, were used. CTDIvol, dose-length product (DLP), weight, and height were recorded for all patients. Body mass index (BMI) was calculated accordingly. To convert CTDIvol to SSDE, AP and lateral dimensions at the mid-liver level were measured for each patient. Results: Compared with FBP, 30% ASIR reduced dose by 44.1% (SSDE: 12.19 mGy vs. 21.83 mGy), while 10% ASIR reduced dose by 20.6% (SSDE: 17.32 mGy vs. 21.83 mGy). Use of SAFIRE reduced dose by 61.4% (SSDE: 8.77 mGy vs. 22.7 mGy). The geometric mean for patients scanned with ASIR was larger than for patients scanned with FBP (geometric mean: 297.48 mm vs. 284.76 mm). The same trend was observed for the Siemens scanner where SAFIRE was used (geometric mean: 316 mm with SAFIRE vs. 239 mm with FBP). Patient size differences suggest that further dose reduction is possible. Conclusion: Our data confirmed that in clinical practice IR can significantly reduce dose to patients who undergo CT examinations, while meeting diagnostic requirements for image quality.

  20. Dynamic and accretive composition of patient engagement instruments for personalized plan generation.

    PubMed

    Hsueh, Pei-Yun S; Zhu, Xinxin; Deng, Vincent; Ramarishnan, Sreeram; Ball, Marion

    2014-01-01

    Patient engagement is important to help patients become more informed and active in managing their health. Effective patient engagement demands short yet valid instruments for measuring self-efficacy in various care dimensions. However, static instruments are often too lengthy to be effective for assessment purposes. Furthermore, these tests can neither account for the dynamic nature of measurements over time, nor differentiate care dimensions that are more critical to certain sub-populations. To remedy these disadvantages, we devise a dynamic instrument composition approach that can model the measurement of patient self-efficacy over time and iteratively select critical care dimensions and appropriate assessment questions based on dynamic user categorization. The dynamically composed instruments are expected to guide patients through self-management reinforcement cycles within or across care dimensions, while being tightly integrated into clinical workflow and standard care processes.

  1. Tradeoff between noise reduction and inartificial visualization in a model-based iterative reconstruction algorithm on coronary computed tomography angiography.

    PubMed

    Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki

    2018-05-01

    We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under different settings of the forward-projected model-based iterative reconstruction solution (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions, FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced a significantly higher contrast-to-noise ratio than the other reconstructions. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although its image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility of FIRST-CS was superior to that of FIRST-body.

  2. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. Therefore, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Small region of interest (ROI) setting and reverse processing were also applied for improving performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
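
    To make the ordered-subsets idea concrete, here is a hedged Python sketch of a generic ordered-subsets scheme on a least-squares problem; it uses subset gradients rather than the paper's coordinate-descent update, and all names are illustrative.

    ```python
    import numpy as np

    def os_least_squares(A, b, n_subsets=4, n_iters=10, seed=0):
        # Ordered subsets: cycle through disjoint subsets of the data, each time
        # stepping against the subset gradient scaled to approximate the full
        # gradient. One pass over all subsets costs about one full iteration.
        m, n = A.shape
        x = np.zeros(n)                      # constant (zero) initial image
        rng = np.random.default_rng(seed)
        subsets = np.array_split(rng.permutation(m), n_subsets)
        # Conservative step size to keep the scaled subset updates stable.
        step = 1.0 / (n_subsets * np.linalg.norm(A, 2) ** 2)
        for _ in range(n_iters):
            for s in subsets:
                g = n_subsets * A[s].T @ (A[s] @ x - b[s])
                x -= step * g
        return x
    ```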

  4. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum and by large data volume; dimension reduction is the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods applied for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
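
    The paper does not reproduce its corrected formulation, but one common "carpet"-style estimator is Peleg's blanket method; the sketch below applies it to a single spectral curve and is an assumption-laden illustration, not the authors' algorithm.

    ```python
    import numpy as np

    def blanket_fractal_dimension(f, max_eps=8):
        # Grow upper/lower "blankets" around the curve; the blanket area A(eps)
        # scales roughly as eps^(2 - D) for a curve of fractal dimension D.
        f = np.asarray(f, dtype=float)
        u, l, areas = f.copy(), f.copy(), []
        for eps in range(1, max_eps + 1):
            u = np.maximum(u + 1, np.maximum(np.roll(u, 1), np.roll(u, -1)))
            l = np.minimum(l - 1, np.minimum(np.roll(l, 1), np.roll(l, -1)))
            areas.append((u - l).sum() / (2 * eps))
        eps = np.arange(1, max_eps + 1)
        slope = np.polyfit(np.log(eps), np.log(areas), 1)[0]
        return 2.0 - slope  # estimated fractal dimension

    # Note: np.roll wraps the endpoints; adequate for a sketch only.
    ```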

  5. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map to have an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider the feature space that limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  7. A new approach for solving the three-dimensional steady Euler equations. I - General theory

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Adamczyk, J. J.

    1986-01-01

    The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.

  8. A new approach for solving the three-dimensional steady Euler equations. I - General theory

    NASA Astrophysics Data System (ADS)

    Chang, S.-C.; Adamczyk, J. J.

    1986-08-01

    The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.

  9. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes an MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environments. The KL-PWLS implementation may have an advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
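
    As a small illustration of the kind of minimization involved, here is a hedged Python sketch of a quadratically penalized WLS problem solved by Gauss-Seidel sweeps; the 1-D smoothness penalty stands in for the MRF Gibbs functional and is not the authors' sinogram-domain model.

    ```python
    import numpy as np

    def pwls_gauss_seidel(A, b, w, beta, n_iter=100):
        # Minimize (Ax-b)^T W (Ax-b) + beta * x^T L x by Gauss-Seidel on the
        # normal equations H x = c, with L a 1-D second-difference penalty.
        n = A.shape[1]
        W = np.diag(w)
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        H = A.T @ W @ A + beta * L
        c = A.T @ W @ b
        x = np.zeros(n)
        for _ in range(n_iter):
            for j in range(n):
                # solve for x[j] holding the other components fixed
                x[j] = (c[j] - H[j] @ x + H[j, j] * x[j]) / H[j, j]
        return x
    ```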

  10. Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.

    PubMed

    Chabukswar, Ar; Jagdale, Sc; Kuchekar, Bs; Joshi, Vd; Deshmukh, Gr; Kothawade, Hs; Kuckekar, Ab; Lokhande, Pd

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify the process, which will yield tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability, due to the improved sigma level of the process from 1.5 to 4; a higher yield, due to reduced variation and a reduction in thick tablets; a reduction in packing line stoppages; a reduction in re-work by 50%; a more standardized process with smooth flow and a change in coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product.

  11. Six Sigma: Process of Understanding the Control and Capability of Ranitidine Hydrochloride Tablet

    PubMed Central

    Chabukswar, AR; Jagdale, SC; Kuchekar, BS; Joshi, VD; Deshmukh, GR; Kothawade, HS; Kuckekar, AB; Lokhande, PD

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify the process, which will yield tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability, due to the improved sigma level of the process from 1.5 to 4; a higher yield, due to reduced variation and a reduction in thick tablets; a reduction in packing line stoppages; a reduction in re-work by 50%; a more standardized process with smooth flow and a change in coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product. PMID:21607050

  12. Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Dwoyer, Douglas L.

    1987-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.

  13. Comparison of Knowledge-based Iterative Model Reconstruction and Hybrid Reconstruction Techniques for Liver CT Evaluation of Hypervascular Hepatocellular Carcinoma.

    PubMed

    Park, Hyun Jeong; Lee, Jeong Min; Park, Sung Bin; Lee, Jong Beum; Jeong, Yoong Ki; Yoon, Jeong Hee

    The purpose of this work was to evaluate the image quality, lesion conspicuity, and dose reduction provided by knowledge-based iterative model reconstruction (IMR) in computed tomography (CT) of the liver compared with hybrid iterative reconstruction (IR) and filtered back projection (FBP) in patients with hepatocellular carcinoma (HCC). Fifty-six patients with 61 HCCs who underwent multiphasic reduced-dose CT (RDCT; n = 33) or standard-dose CT (SDCT; n = 28) were retrospectively evaluated. Images reconstructed with FBP, hybrid IR (iDose), and IMR were evaluated for image quality using CT attenuation and image noise. Objective and subjective image quality of the RDCT and SDCT sets were independently assessed by 2 observers in a blinded manner. Image quality and lesion conspicuity were better with IMR for both RDCT and SDCT than with either FBP or IR (P < 0.001). The contrast-to-noise ratio of HCCs in IMR-RDCT was significantly higher on the delayed phase (DP) (P < 0.001), and comparable on the arterial phase, relative to IR-SDCT (P = 0.501). IMR-RDCT was significantly superior to FBP-SDCT (P < 0.001). Compared with IR-SDCT, IMR-RDCT was comparable in image sharpness and tumor conspicuity on the arterial phase, and superior in image quality, noise, and lesion conspicuity on the DP. With the use of IMR, a 27% reduction of effective dose was achieved with RDCT (12.7 ± 0.6 mSv) compared with SDCT (17.4 ± 1.1 mSv) without loss of image quality (P < 0.001). Iterative model reconstruction provides better image quality and tumor conspicuity than FBP and IR, with considerable noise reduction. In addition, IMR-RDCT achieved results at least comparable to IR-SDCT for the evaluation of HCCs.

  14. The fractal geometry of Hartree-Fock

    NASA Astrophysics Data System (ADS)

    Theel, Friethjof; Karamatskou, Antonia; Santra, Robin

    2017-12-01

    The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules so that its usage is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined. The set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system since dynamical systems are iterations where iteration steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter. In our case, a complex parameter λ controls the strength of the electron-electron interaction. An investigation of the convergence behavior depending on the parameter λ is performed for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that with increasing nuclear charge, the fragmentation increases as well.

  15. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed an objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba, and 13%-77% on Philips. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides a performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
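
    For readers reproducing such measurements, a minimal sketch of the two simplest metrics (ROI noise and CNR) follows; the ROI positions and the synthetic image are placeholders, and MTF/NPS estimation requires considerably more care.

    ```python
    import numpy as np

    def noise_and_cnr(img, obj_roi, bg_roi):
        # Noise: standard deviation in a uniform background ROI.
        # CNR: |mean(object) - mean(background)| / background noise.
        obj, bg = img[obj_roi], img[bg_roi]
        noise = bg.std()
        return noise, abs(obj.mean() - bg.mean()) / noise

    img = np.random.default_rng(0).normal(100.0, 5.0, (256, 256))  # stand-in slice
    img[100:140, 100:140] += 20.0                                  # low-contrast insert
    print(noise_and_cnr(img, np.s_[100:140, 100:140], np.s_[10:50, 10:50]))
    ```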

  16. The dimensional reduction method for identification of parameters that trade-off due to similar model roles.

    PubMed

    Davidson, Shaun M; Docherty, Paul D; Murray, Rua

    2017-03-01

    Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of the hyperplanes where trade-off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has the potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
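
    One cheap way to see the trade-off the DRM targets is to examine the Jacobian at a fitted optimum: near-zero singular values indicate directions along which parameters can trade off. A hedged sketch against the LMQ baseline (a toy two-parameter model, not the Bergman or PR model):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(p, t, y):
        a, b = p
        return a * np.exp(-b * t) - y          # toy two-parameter decay model

    t = np.linspace(0.0, 5.0, 50)
    y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
    fit = least_squares(residuals, x0=[1.0, 1.0], args=(t, y), method="lm")
    _, s, _ = np.linalg.svd(fit.jac, full_matrices=False)
    # A tiny s.min()/s.max() ratio flags a near-flat direction: parameter trade-off.
    print(fit.x, s.min() / s.max())
    ```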

  17. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, both in clean and noisy environments.
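
    For orientation, here is a hedged sketch of the generic DART step that ADART adapts: threshold the continuous reconstruction to the known grey levels, then keep only boundary pixels as unknowns. The 4-neighbour border test below is a plain-DART-style placeholder, not ADART's adaptive criterion.

    ```python
    import numpy as np

    def dart_segment_and_free(x, grey_levels):
        # Threshold each pixel of the 2-D reconstruction x to the nearest of
        # the known discrete grey levels, then mark boundary pixels as the
        # only free unknowns for the next algebraic update.
        levels = np.asarray(grey_levels, dtype=float)
        seg = levels[np.abs(x[..., None] - levels).argmin(-1)]
        interior = np.zeros(seg.shape, dtype=bool)
        interior[1:-1, 1:-1] = (
            (seg[1:-1, 1:-1] == seg[:-2, 1:-1]) & (seg[1:-1, 1:-1] == seg[2:, 1:-1]) &
            (seg[1:-1, 1:-1] == seg[1:-1, :-2]) & (seg[1:-1, 1:-1] == seg[1:-1, 2:])
        )
        return seg, ~interior  # free = border (and image-edge) pixels
    ```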

  18. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.

  19. How Measurement and Modeling of Attendance Matter to Assessing Dimensions of Inequality

    ERIC Educational Resources Information Center

    Dougherty, Shaun M.

    2018-01-01

    Each iteration of high stakes accountability has included requirements to include measures of attendance in their accountability programs, thereby increasing the salience of this measure. Researchers too have turned to attendance and chronic absence as important outcomes in evaluations and policy studies. Often, too little attention is paid to the…

  20. User Acceptance of a Haptic Interface for Learning Anatomy

    ERIC Educational Resources Information Center

    Yeom, Soonja; Choi-Lundberg, Derek; Fluck, Andrew; Sale, Arthur

    2013-01-01

    Visualizing the structure and relationships in three dimensions (3D) of organs is a challenge for students of anatomy. To provide an alternative way of learning anatomy engaging multiple senses, we are developing a force-feedback (haptic) interface for manipulation of 3D virtual organs, using design research methodology, with iterations of system…

  1. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction

    PubMed Central

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M

    2014-01-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
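
    For context, the basic SART view update that the diffusion compensation wraps around looks roughly like the following Python sketch (matrix form, illustrative names; the paper's TPA compensation is applied to the region outside each view's FOV after this step).

    ```python
    import numpy as np

    def sart_view_update(x, A, b, lam=0.5):
        # Relaxed SART update for one projection view: backproject the
        # row-normalized residual, column-normalize, and scale by lam.
        row = A.sum(axis=1)
        col = A.sum(axis=0)
        r = (b - A @ x) / np.where(row > 0, row, 1.0)
        return x + lam * (A.T @ r) / np.where(col > 0, col, 1.0)
    ```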

  2. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or "proxies"), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and risk of overfitting, we follow a leave-one-out cross-validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and the performance of the method is assessed. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A.H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).
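
    A toy version of the iterative construction, under stated assumptions: a linear error model, synthetic proxy/exact maps, and a simple prediction-stability stopping rule in place of the paper's bootstrap measure.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    realizations = rng.normal(size=(50, 3))                # placeholder ensemble
    proxy = lambda R: R @ np.array([1.0, 0.5, -0.2])       # inexpensive response
    exact = lambda R: proxy(R) + 0.1 * np.sin(proxy(R))    # expensive response

    n_train, prev = 5, None                                # small initial learning set
    while n_train < len(realizations):
        Xp = proxy(realizations[:n_train]).reshape(-1, 1)
        model = LinearRegression().fit(Xp, exact(realizations[:n_train]))
        pred = model.predict(proxy(realizations).reshape(-1, 1))
        if prev is not None and np.max(np.abs(pred - prev)) < 1e-3:
            break                                          # predictions have stabilized
        prev, n_train = pred, n_train + 1                  # add one more realization
    ```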

  3. Metal artefact reduction for patients with metallic dental fillings in helical neck computed tomography: comparison of adaptive iterative dose reduction 3D (AIDR 3D), forward-projected model-based iterative reconstruction solution (FIRST) and AIDR 3D with single-energy metal artefact reduction (SEMAR).

    PubMed

    Yasaka, Koichiro; Kamiya, Kouhei; Irie, Ryusuke; Maeda, Eriko; Sato, Jiro; Ohtomo, Kuni

    To compare the degree of metal artefact and the depiction of structures in helical neck CT, in patients with metallic dental fillings, among adaptive iterative dose reduction three dimensional (AIDR 3D), forward-projected model-based iterative reconstruction solution (FIRST) and AIDR 3D with single-energy metal artefact reduction (SEMAR-A). In this retrospective clinical study, 22 patients (males, 13; females, 9; mean age, 64.6 ± 12.6 years) with metallic dental fillings who underwent contrast-enhanced helical CT involving the oropharyngeal region were included. Neck axial images were reconstructed with AIDR 3D, FIRST and SEMAR-A. The degree of metal artefact and the depiction of structures (the apex and root of the tongue, parapharyngeal space, superior portion of the internal jugular chain and parotid gland) were evaluated on a four-point scale by two radiologists. Placing regions of interest, standard deviations of the oral cavity and nuchal muscle (at a slice where no metal exists) were measured and metal artefact indices were calculated (the square root of the difference of the squares of the two). In SEMAR-A, metal artefact was significantly reduced and the depiction of all structures was significantly improved compared with FIRST and AIDR 3D (p ≤ 0.001, sign test). The metal artefact index for the oral cavity in AIDR 3D/FIRST/SEMAR-A was 572.0/477.7/88.4, and significant differences were seen between each pair of reconstruction algorithms (p < 0.0001, Wilcoxon signed-rank test). SEMAR-A could provide images with less metal artefact and better depiction of structures than AIDR 3D and FIRST.

  4. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. First, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight on trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and be extensively validated.
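
    The thermal dose formula referred to is commonly the cumulative-equivalent-minutes-at-43 °C (CEM43) expression; a minimal sketch follows (the 240 CEM43 necrosis threshold is a commonly used value, not taken from this paper).

    ```python
    import numpy as np

    def cem43(temps_c, dt_s):
        # CEM43 thermal dose: sum of R^(43 - T) * dt, with R = 0.5 for
        # T >= 43 °C and R = 0.25 below; dt in seconds, dose in
        # equivalent minutes at 43 °C.
        temps = np.asarray(temps_c, dtype=float)
        R = np.where(temps >= 43.0, 0.5, 0.25)
        return float(np.sum(R ** (43.0 - temps) * dt_s / 60.0))

    dose = cem43([41.0, 45.0, 52.0, 55.0], dt_s=1.0)
    print(dose, dose >= 240.0)  # 240 CEM43 is a commonly cited necrosis threshold
    ```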

  5. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning techniques, such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction, have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  6. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.

  7. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to resolve the reconstruction problem from a limited number of projections. This allows the reduction of radiation exposure to patients during the data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
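
    A generic FISTA-with-soft-thresholding loop for a least-squares data term with an l1 penalty is sketched below; this is the textbook scheme the abstract names, not the authors' CT-specific implementation.

    ```python
    import numpy as np

    def fista_l1(A, b, lam, n_iter=100):
        # Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA:
        # gradient step on the smooth term, soft threshold, momentum step.
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            g = z - A.T @ (A @ z - b) / L        # gradient step at extrapolated point
            x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x
    ```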

  8. Socio-Technical Dimensions of an Outdoor Mobile Learning Environment: A Three-Phase Design-Based Research Investigation

    ERIC Educational Resources Information Center

    Land, Susan M.; Zimmerman, Heather Toomey

    2015-01-01

    This design-based research project examines three iterations of Tree Investigators, a learning environment designed to support science learning outdoors at an arboretum and nature center using mobile devices (iPads). Researchers coded videorecords and artifacts created by children and parents (n = 53) to understand how both social and…

  9. Method of making a silicon nanowire device

    DOEpatents

    None, None

    2017-05-23

    There is provided an electronic device and a method for its manufacture. The device comprises an elongate silicon nanowire less than 0.5 µm in cross-sectional dimensions and having a hexagonal cross-sectional shape due to annealing-induced energy relaxation. The method, in examples, includes thinning the nanowire through iterative oxidation and etching of the oxidized portion.
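
    Back-of-the-envelope arithmetic for the iterative thinning (all numbers assumed for illustration; thermal oxidation consumes roughly 0.45 nm of silicon per 1 nm of oxide grown):

    ```python
    # Assumed numbers, for illustration only.
    d = 120.0                              # starting diameter, nm
    oxide_per_cycle = 10.0                 # oxide grown per cycle, nm
    si_per_cycle = 0.45 * oxide_per_cycle  # silicon consumed per cycle, nm

    cycles = 0
    while d > 50.0:                        # target diameter, nm
        d -= 2 * si_per_cycle              # consumed on both sides of the wire
        cycles += 1
    print(cycles, d)                       # cycles needed and final diameter
    ```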

  10. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    PubMed

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
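
    The underlying dynamic program relies on the fact that an optimal 1-D k-means partition uses contiguous blocks of the sorted data. A hedged O(kn²) Python sketch follows (the R package implements a faster variant).

    ```python
    import numpy as np

    def ckmeans_1d(x, k):
        # D[c][i]: optimal within-cluster sum of squares for x[0..i] in c+1 clusters.
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        p1 = np.concatenate([[0.0], np.cumsum(x)])      # prefix sums
        p2 = np.concatenate([[0.0], np.cumsum(x * x)])  # prefix sums of squares
        def ssq(j, i):  # sum of squared deviations of x[j..i] (inclusive)
            s, q, m = p1[i + 1] - p1[j], p2[i + 1] - p2[j], i - j + 1
            return q - s * s / m
        D = np.full((k, n), np.inf)
        for i in range(n):
            D[0][i] = ssq(0, i)
        for c in range(1, k):
            for i in range(c, n):
                D[c][i] = min(D[c - 1][j - 1] + ssq(j, i) for j in range(c, i + 1))
        return D[k - 1][n - 1]  # optimal total within-cluster sum of squares

    print(ckmeans_1d([1.0, 1.1, 5.0, 5.1, 9.0], k=3))  # ~0.01
    ```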

  11. Factors Contributing to Cognitive Absorption and Grounded Learning Effectiveness in a Competitive Business Marketing Simulation

    ERIC Educational Resources Information Center

    Baker, David Scott; Underwood, James, III; Thakur, Ramendra

    2017-01-01

    This study aimed to establish a pedagogical positioning of a business marketing simulation as a grounded learning teaching tool and empirically assess the dimensions of cognitive absorption related to grounded learning effectiveness in an iterative business simulation environment. The method/design and sample consisted of a field study survey…

  12. Challenging the Teaching of Global Ethical Unity: Religious Ethical Claims as Democratic Iterations within Sustainability Didactics

    ERIC Educational Resources Information Center

    Franck, Olof

    2017-01-01

    The aim of this article is to highlight the role of religiously motivated ethics within the field of sustainability didactics. The article starts with critical reflections on the idea that religion, by proposing claims for knowledge of absolute authorities such as "divine beings or supernatural dimensions", offers capacity for uniting…

  13. Cluster Correspondence Analysis.

    PubMed

    van de Velden, M; D'Enza, A Iodice; Palumbo, F

    2017-03-01

    A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
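
    A minimal sketch of the tandem baseline (reduction first, clustering second), using continuous data for brevity; for categorical data one would substitute MCA scores for PCA. The data matrix is a placeholder.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X = np.random.default_rng(0).normal(size=(100, 10))   # placeholder data matrix
    scores = PCA(n_components=2).fit_transform(X)         # step 1: dimension reduction
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(scores)  # step 2: clustering
    # The paper's point: masking variables can distort step 1 before step 2 sees
    # the data, which the joint (simultaneous) method avoids.
    ```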

  14. Dimension reduction techniques for the integrative analysis of multi-omics data

    PubMed Central

    Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.

    2016-01-01

    State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput 'omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations), and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681

  15. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in the knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured values of each parameter were grouped by suitable intervals, called "size groups," and the average values of the size groups were calculated. The knee joint subjects were grouped into "patient groups" based on the size group numbers of each parameter. Through iterative calculations that decrease the errors between the average dimension parameter values of each patient group and the dimension parameter values of the subjects, the average dimension parameter values giving errors below the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.

  16. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image for up to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  17. An efficient iterative model reduction method for aeroviscoelastic panel flutter analysis in the supersonic regime

    NASA Astrophysics Data System (ADS)

    Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.

    2018-05-01

    The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
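
    Generic Ritz-basis reduction, for orientation (a random orthonormal basis on a toy 1-D stiffness/mass pair; the paper's contribution is the iterative enrichment of such a basis for frequency- and temperature-dependent viscoelastic terms, which this sketch does not include):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    n, m = 400, 12                                        # full and reduced dimensions
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # toy stiffness matrix
    M = np.eye(n)                                         # toy mass matrix
    V, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(n, m)))
    Kr, Mr = V.T @ K @ V, V.T @ M @ V                     # m x m reduced matrices
    ritz_vals = eigh(Kr, Mr, eigvals_only=True)           # Ritz eigenvalue estimates
    ```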

  18. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
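
    The reuse idea carries over directly to modern sparse solvers: factor a preconditioner once for the baseline shape and apply it to the perturbed-shape systems. A hedged SciPy sketch (the matrices and the perturbation are synthetic placeholders):

    ```python
    import numpy as np
    from scipy.sparse import identity, random as sprandom
    from scipy.sparse.linalg import LinearOperator, gmres, spilu

    n = 200
    A = (sprandom(n, n, density=0.05, random_state=0) + 10 * identity(n)).tocsc()
    M = spilu(A)                               # preconditioner from the baseline system
    prec = LinearOperator((n, n), M.solve)
    A_pert = (A + 0.01 * identity(n)).tocsc()  # perturbed-shape system
    b = np.ones(n)
    x, info = gmres(A_pert, b, M=prec)         # reuse the old preconditioner
    print(info)                                # 0 indicates convergence
    ```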

  19. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimensions 1.9 × 0.9 m² by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (about 1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison, at BATMAN, of the source performance of an ITER-relevant extraction system equipped with chamfered apertures of 14 mm diameter and of the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density (consistent with ion trajectory calculations) or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  20. Design of a -1 MV dc UHV power supply for ITER NBI

    NASA Astrophysics Data System (ADS)

    Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.

    2009-05-01

    Procurement of a dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency, as the Japan Domestic Agency (JADA) for ITER, contributes to the procurement of dc -1 MV ultra-high voltage (UHV) components such as a dc -1 MV generator, a transmission line and a -1 MV insulating transformer for the ITER NBI power supply. An inverter frequency of 150 Hz in the -1 MV power supply and the major circuit parameters have been proposed and adopted for the ITER NBI. The dc UHV insulation has been carefully designed, since dc long-pulse insulation is quite different from conventional ac insulation or dc short-pulse systems. A multi-layer insulation structure of the transformer for long pulses of up to 3600 s has been designed with electric field simulation. Based on the simulation, the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. The JADA will provide an effective surge suppression system composed of core snubbers and resistors. The input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of a breakdown at -1 MV.

  1. Dynamical analysis of Grover's search algorithm in arbitrarily high-dimensional search spaces

    NASA Astrophysics Data System (ADS)

    Jin, Wenliang

    2016-01-01

    We discuss at length the dynamical behavior of Grover's search algorithm when all of the Walsh-Hadamard transformations contained in the algorithm are exposed to random perturbations, which increase the dimension of the search space. We give concise and general mathematical formulations that approximately characterize the maximum success probability of finding a unique desired state in a large unsorted database, together with the corresponding number of Grover iterations. These formulations are applicable to search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover (Phys Rev Lett 80:4329-4332, 1998).
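
    For orientation, the unperturbed Grover iteration that this analysis perturbs can be simulated classically in a few lines. The sketch below is illustrative only (the database size and marked index are arbitrary choices, and the paper's random perturbations of the Walsh-Hadamard steps are not modeled): it alternates the oracle's phase flip with inversion about the mean and recovers the familiar (pi/4)*sqrt(N) iteration count.

        import numpy as np

        N = 64                     # database size (illustrative)
        target = 17                # index of the marked state (illustrative)

        state = np.full(N, 1 / np.sqrt(N))              # uniform superposition
        n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))  # near-optimal iteration count

        for _ in range(n_iter):
            state[target] *= -1                # oracle: phase-flip the marked state
            state = 2 * state.mean() - state   # diffusion: inversion about the mean

        print(f"P(success) after {n_iter} iterations: {state[target]**2:.4f}")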

  2. Dust measurements in tokamaks (invited).

    PubMed

    Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C

    2008-10-01

    Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak, dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, which detect dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate the diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by the cameras, and spectroscopic diagnostics observe an increase in carbon line (C I, C II, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudakov, D. L.; Yu, J. H.; Boedo, J. A.

    Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak, dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, which detect dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate the diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by the cameras, and spectroscopic diagnostics observe an increase in carbon line (C I, C II, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel

    Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.

  5. Iterated Hamiltonian type systems and applications

    NASA Astrophysics Data System (ADS)

    Tiba, Dan

    2018-04-01

    We discuss, in arbitrary dimension, certain Hamiltonian type systems and prove existence, uniqueness and regularity properties, under the independence condition. We also investigate the critical case, define a class of generalized solutions and prove existence and basic properties. Relevant examples and counterexamples are also indicated. The applications concern representations of implicitly defined manifolds and their perturbations, motivated by differential systems involving unknown geometries.

  6. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and that the number of cells does not significantly affect the rate of convergence. The algorithm thus effectively reduces the solution of the scattering problem to order O(N^2), instead of the O(N^3) of direct solvers.
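
    The system-iteration idea, a block-iterative sweep in which each substructure is solved exactly while the others are held fixed, can be sketched as follows. Everything here is illustrative: a small diagonally dominant matrix stands in for the scattering impedance matrix, and the Green's-function symmetry machinery used to regenerate the blocks is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        n, nb = 12, 3                                     # 12 unknowns, 3 substructures
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
        b = rng.standard_normal(n)
        blocks = np.split(np.arange(n), nb)

        x = np.zeros(n)
        for sweep in range(100):
            for idx in blocks:
                # right-hand side seen by this substructure, given current values elsewhere
                r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
                x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
            if np.linalg.norm(b - A @ x) < 1e-10:
                break

        print(f"converged in {sweep + 1} sweeps, residual {np.linalg.norm(b - A @ x):.2e}")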

  7. On the self-similar solution to the Euler equations for an incompressible fluid in three dimensions

    NASA Astrophysics Data System (ADS)

    Pomeau, Yves

    2018-03-01

    The equations for a self-similar solution to an inviscid incompressible fluid are mapped into an integral equation that hopefully can be solved by iteration. It is argued that the exponents of the similarity are ruled by Kelvin's theorem of conservation of circulation. The end result is an iteration with a nonlinear term entering a kernel given by a 3D integral for a swirling flow, likely within reach of present-day computational power. Because of the slow decay of the similarity solution at large distances, its kinetic energy diverges, and some mathematical results excluding non-trivial solutions of the Euler equations in the self-similar case do not apply.

  8. Self-consistent one dimension in space and three dimension in velocity kinetic trajectory simulation model of magnetized plasma-wall transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju

    2015-11-15

    We have developed a self-consistent 1d3v (one dimension in space and three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed in order to calculate the ion distribution function along them, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used the model to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous work from other models, and hence we expect our 1d3v KTS model to provide a basis for the study of all types of magnetized plasmas, yielding more accurate results.

  9. SU-E-I-04: Improving CT Quality for Radiation Therapy of Patients with High Body Mass Index Using Iterative Reconstruction Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noid, G; Tai, A; Li, X

    2015-06-15

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. CT IQ for patients with a high Body Mass Index (BMI) can suffer from increased noise due to photon starvation. The purpose of this study is to investigate and quantify the IQ enhancement for high-BMI patients through the application of IR algorithms. Methods: CT raw data collected for 6 radiotherapy (RT) patients with BMI greater than or equal to 30 were retrospectively analyzed. All CT data were acquired using a CT scanner (Somatom Definition AS Open, Siemens) installed in a linac room (CT-on-rails) using standard imaging protocols. The CT data were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared and correlated with patient depth and BMI. The patient depth was defined as the largest distance from anterior to posterior along the bilateral symmetry axis. Results: IR techniques are demonstrated to preserve contrast and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is roughly doubled by adopting the highest SAFIRE strength. A significant correlation was observed between patient depth and IR noise reduction through Pearson's correlation test (R = 0.9429, P = 0.0167). The mean patient depth was 30.4 cm, and the average relative noise reduction for the strongest iterative reconstruction was 55%. Conclusion: The IR techniques produce a measurable enhancement of CT IQ by reducing noise. Dramatic noise reduction is evident for the high-BMI patients. The improved CT IQ enables more accurate delineation of tumors and organs at risk and more accurate dose calculations for RT planning and delivery guidance. Supported by Siemens.

  10. Impact of iterative metal artifact reduction on diagnostic image quality in patients with dental hardware.

    PubMed

    Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian

    2017-03-01

    Background: Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective metal artifact reduction algorithms implemented in the clinical workflow are crucial to gain higher diagnostic image quality in patients with metallic hardware. Purpose: To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods: Thirty consecutive patients scheduled for CT imaging and with dental fillings were included in the analysis. All patients underwent CT imaging using a second-generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28 s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included a standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results: All 30 patients were included in the analysis, with comparable reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR than for standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of the degree of artifacts were significantly lower in the iMAR reconstructions than in the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05). Conclusion: The tested iterative, raw-data-based MAR reconstruction algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.

  11. Alternative method for variable aspect ratio vias using a vortex mask

    NASA Astrophysics Data System (ADS)

    Schepis, Anthony R.; Levinson, Zac; Burbine, Andrew; Smith, Bruce W.

    2014-03-01

    Historically, IC (integrated circuit) device scaling has bridged the gap between technology nodes. Device size reduction is enabled by increased pattern density, enhancing functionality and effectively reducing cost per chip. Exemplifying this trend are aggressive reductions in memory cell sizes that have resulted in systems with diminishing area between bit/word lines. This poses an even greater challenge in the patterning of contact-level features, which are inherently difficult to resolve because of their relatively small area and complex aerial image. To accommodate these trends, semiconductor device design has shifted toward the implementation of elliptical contact features. This empowers designers to maximize the use of free device space, preserving contact area while effectively reducing the via dimension along a single axis. It is therefore critical to provide methods that enhance the resolving capacity of varying-aspect-ratio vias for implementation in electronic design systems. Vortex masks, characterized by their helically induced propagation of light and consequent dark core, afford great potential for the patterning of such features when coupled with a high-resolution negative-tone resist system. This study investigates the integration of a vortex mask in a 193 nm immersion (193i) lithography system and qualifies its ability to augment aspect ratio through feature density using aerial image vector simulation. It was found that vortex-fabricated vias provide a distinct resolution advantage over traditionally patterned contact features employing a 6% attenuated phase-shift mask (APM). 1:1 features were resolvable at 110 nm pitch with a 38 nm critical dimension (CD) and 110 nm depth of focus (DOF) at 10% exposure latitude (EL). Furthermore, iterative source-mask optimization was executed as a means to augment aspect ratio. By employing mask asymmetries and directionally biased sources, aspect ratios ranging between 1:1 and 2:1 were achievable; however, this range is ultimately dictated by the pitch employed.

  12. A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes

    NASA Astrophysics Data System (ADS)

    Schurtz, Guy

    2000-10-01

    Apparent inhibition of thermal heat flow is one of the most ancient problems in computational inertial fusion, and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser-produced plasmas has to be considered as a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension recovers the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.

  13. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. Ten consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. Five observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.

  14. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    NASA Astrophysics Data System (ADS)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D - 2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  15. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  16. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are derived. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
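
    A minimal single-variate sketch of the underlying spectral representation idea, with one FFT evaluating all harmonic terms at once, is given below. The target spectrum, frequency step, and sizes are illustrative assumptions, and the paper's dimension-reduction constraints (random functions correlating the random variables) are not included.

        import numpy as np

        rng = np.random.default_rng(1)
        M, dw = 1024, 0.05                    # number of frequencies, step (rad/s)
        w = np.arange(M) * dw
        S = 1.0 / (1.0 + w**2)                # illustrative one-sided target spectrum
        S[0] = 0.0                            # no power at zero frequency

        phi = rng.uniform(0, 2 * np.pi, M)    # independent random phases
        c = np.sqrt(2 * S * dw) * np.exp(1j * phi)

        # one inverse FFT evaluates u(t_n) = Re sum_k c_k exp(i k dw t_n) on a time grid
        Nt = 2 * M
        u = np.real(Nt * np.fft.ifft(c, Nt))  # sample path at t_n = n * 2*pi / (Nt * dw)
        print(u[:5])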

  17. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.

    PubMed

    Kaplan, Adam; Lock, Eric F

    2017-01-01

    Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, to the prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package r.jive on CRAN, in the function jive.predict.

  18. Using Minimum-Surface Bodies for Iteration Space Partitioning

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A number of known techniques for improving cache performance in scientific computations involve reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators whose domain is the iteration space. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that the covering has a good surface-to-volume ratio, and we present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For unstructured grids no cache-efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid incurs a significantly larger number of cache misses than a similar operator on a structured grid.

  19. Classification of small lesions on dynamic breast MRI: Integrating dimension reduction and out-of-sample extension into CADx methodology

    PubMed Central

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2014-01-01

    Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
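
    The methodological point, that the dimension reduction must be fit on the training fold alone and the test fold mapped through an out-of-sample extension, can be illustrated with stand-ins. In the hedged sketch below, PCA and support vector regression replace the paper's Minkowski-functional features and non-linear embeddings, and the data are synthetic.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 50))            # 120 lesions, 50 texture features
        y = (X[:, :3].sum(axis=1) > 0).astype(float)  # synthetic benign/malignant labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        pca = PCA(n_components=5).fit(X_tr)     # dimension reduction sees training data only
        clf = SVR(kernel="rbf").fit(pca.transform(X_tr), y_tr)

        # out-of-sample extension: project the untouched test set with the trained map
        scores = clf.predict(pca.transform(X_te))
        print("AUC:", roc_auc_score(y_te, scores))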

  20. Reduction of asymmetric wall force in ITER disruptions with fast current quench

    NASA Astrophysics Data System (ADS)

    Strauss, H.

    2018-02-01

    One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.

  1. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  2. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  3. Iterative methods for dose reduction and image enhancement in tomography

    DOEpatents

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method are disclosed for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections, which are iteratively refined through modification in object space and Fourier space. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate the interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods.
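
    A generic alternating-projection sketch in the spirit of that description is shown below: measured Fourier samples are reimposed in Fourier space, and a physical constraint (here, non-negativity) is enforced in object space. This is an illustration of the general idea, not the patented reconstruction; the phantom, sampling mask, and iteration count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        truth = np.clip(rng.standard_normal((64, 64)), 0, None)   # non-negative phantom
        F_meas = np.fft.fft2(truth)
        mask = rng.random((64, 64)) < 0.3      # only 30% of Fourier samples "measured"

        img = np.zeros_like(truth)
        for _ in range(200):
            F = np.fft.fft2(img)
            F[mask] = F_meas[mask]             # Fourier-space step: reimpose the data
            img = np.real(np.fft.ifft2(F))
            img[img < 0] = 0                   # object-space step: enforce positivity

        print("relative error:", np.linalg.norm(img - truth) / np.linalg.norm(truth))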

  4. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
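
    One classical form of Riccati iteration is the Newton-Kleinman scheme, in which each step solves a Lyapunov equation and the iterates converge to the LQR solution; whether this matches the report's exact variant is an assumption. The sketch below uses an illustrative 2nd-order stable system (not the report's 16th-order turbofan model) and checks the iterate against a direct algebraic Riccati solve.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

        A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # open-loop stable, so K = 0 stabilizes
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.eye(1)

        K = np.zeros((1, 2))
        for _ in range(20):
            Ac = A - B @ K
            # Lyapunov step: Ac^T P + P Ac = -(Q + K^T R K)
            P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
            K = np.linalg.solve(R, B.T @ P)        # updated gain

        print("iterated P:\n", P)
        print("direct ARE solution:\n", solve_continuous_are(A, B, Q, R))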

  5. Reduction of Free Edge Peeling Stress of Laminated Composites Using Active Piezoelectric Layers

    PubMed Central

    Huang, Bin; Kim, Heung Soo

    2014-01-01

    An analytical approach is proposed for the reduction of the free-edge peeling stresses of laminated composites using active piezoelectric layers. The approach is the extended Kantorovich method, which is iterative. Multiple trial-function terms are employed, and the governing equations are derived from the principle of complementary virtual work. The solutions are obtained by solving a generalized eigenvalue problem. By this approach, the stresses automatically satisfy not only the traction-free boundary conditions, but also the free-edge boundary conditions. Through the iteration process, the free-edge stresses converge very quickly. It is found that the peeling stresses generated by mechanical loadings are significantly reduced by applying a proper electric field to the piezoelectric actuators. PMID:25025088

  6. Association of Focal Radiation Dose Adjusted on Cross Sections with Subsolid Nodule Visibility and Quantification on Computed Tomography Images Using AIDR 3D: Comparison Among Scanning at 84, 42, and 7 mAs.

    PubMed

    Nagatani, Yukihiro; Moriya, Hiroshi; Noma, Satoshi; Sato, Shigetaka; Tsukagoshi, Shinsuke; Yamashiro, Tsuneo; Koyama, Mitsuhiro; Tomiyama, Noriyuki; Ono, Yoshiharu; Murayama, Sadayuki; Murata, Kiyoshi

    2018-05-04

    The objectives of this study were to compare the visibility and quantification of subsolid nodules (SSNs) on computed tomography (CT) using adaptive iterative dose reduction with three-dimensional processing between 7 and 42 mAs, and to assess the association of the size-specific dose estimate (SSDE) with the relative measured value change between 7 and 84 mAs (RMVC7-84) and the relative measured value change between 42 and 84 mAs (RMVC42-84). As a Japanese multicenter research project (Area-detector Computed Tomography for the Investigation of Thoracic Diseases [ACTIve] study), 50 subjects underwent chest CT with 120 kV, 0.35 second per location and three tube currents: 240 mA (84 mAs), 120 mA (42 mAs), and 20 mA (7 mAs). Axial CT images were reconstructed using adaptive iterative dose reduction with three-dimensional processing. SSN visibility was assessed with three grades (1, obscure, to 3, definitely visible) using CT at 84 mAs as the reference standard and compared between 7 and 42 mAs using a t-test. The dimension, mean CT density, and SSDE particular to the nodular center of 71 SSNs, and the volume of 58 SSNs (diameter >5 mm), were measured. Measured values (MVs) were compared using Wilcoxon signed-rank tests among CTs at the three doses. Pearson correlation analyses were performed to assess the association of SSDE with RMVC7-84, defined as 100 × (MV at 7 mAs - MV at 84 mAs)/MV at 84 mAs, and with RMVC42-84. SSN visibilities were similar between 7 and 42 mAs (2.76 ± 0.45 vs 2.78 ± 0.40) (P = .67). For larger SSNs (>8 mm), MVs were similar among CTs at the three doses (P > .05). For smaller SSNs (<8 mm), dimensions and volumes on CT at 7 mAs were larger, and the mean CT density was smaller, than at 42 and 84 mAs, and SSDE had mild negative correlations with RMVC7-84 (P < .05). Comparable quantification was demonstrated irrespective of dose for larger SSNs. For smaller SSNs, the nodular exaggerating effect associated with decreased SSDE on CT at 7 mAs compared to 84 mAs could result in visibilities comparable to CT at 42 mAs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  7. Two-level Schwarz methods for nonconforming finite elements and discontinuous coefficients

    NASA Technical Reports Server (NTRS)

    Sarkis, Marcus

    1993-01-01

    Two-level domain decomposition methods are developed for a simple nonconforming approximation of second order elliptic problems. A bound is established for the condition number of these iterative methods, which grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.

  8. Progressing in cable-in-conduit for fusion magnets: from ITER to low cost, high performance DEMO

    NASA Astrophysics Data System (ADS)

    Uglietti, D.; Sedlak, K.; Wesche, R.; Bruzzone, P.; Muzzi, L.; della Corte, A.

    2018-05-01

    The performance of ITER toroidal field (TF) conductors still has significant margin for improvement, because the effective strain of between ‑0.62% and ‑0.95% limits the strands' critical current to between 15% and 45% of the maximum achievable. Prototype Nb3Sn cable-in-conduit conductors have been designed, manufactured and tested in the frame of the EUROfusion DEMO activities. In these conductors the effective strain has shown a clear improvement with respect to the ITER conductors, reaching values between ‑0.55% and ‑0.28%, resulting in a strand critical current which is two to three times higher than in ITER conductors. In terms of the amount of Nb3Sn strand required for the construction of the DEMO TF magnet system, such improvement may lead to a reduction by at least a factor of two with respect to a similar magnet built with ITER-type conductors; a further saving of Nb3Sn is possible if graded conductors/windings are employed. In the best case the DEMO TF magnet could require fewer Nb3Sn strands than the ITER one, despite the larger size of DEMO. Moreover, high-performance conductors could be operated at higher fields than ITER TF conductors, enabling the construction of low-cost, compact, high-field tokamaks.

  9. New Trends in Television Consumption.

    ERIC Educational Resources Information Center

    Richeri, Giuseppe

    A phenomenon which tends to transform the function and methods of traditional television consumption is the gradual reduction of its "mass" dimensions, which tend to disappear for an increasing share of the audience. This reduction of the mass dimension ranges from fragmentation of the audience to its segmentation, and, in the most…

  10. Secondary-structure matching (SSM), a new tool for fast protein structure alignment in three dimensions.

    PubMed

    Krissinel, E; Henrick, K

    2004-12-01

    The present paper describes the SSM algorithm for protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Cα atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.

  11. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from the linearization of a general nonlinear problem or from the discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
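
    A minimal sketch of Gauss-Seidel sweeps with over-relaxation (SOR) is given below; omega = 1 recovers plain Gauss-Seidel, while 1 < omega < 2 can accelerate convergence. The test system and relaxation factor are illustrative.

        import numpy as np

        def sor(A, b, omega=1.5, tol=1e-10, max_sweeps=10_000):
            """Solve Ax = b by successive over-relaxation sweeps."""
            n = len(b)
            x = np.zeros(n)
            for sweep in range(max_sweeps):
                for i in range(n):
                    sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contributions
                    x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(b - A @ x) < tol:
                    break
            return x, sweep + 1

        A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])   # tridiagonal test system
        b = np.array([1.0, 2, 3])
        x, sweeps = sor(A, b)
        print(x, "in", sweeps, "sweeps")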

  12. Fractal dimension of microbead assemblies used for protein detection.

    PubMed

    Hecht, Ariel; Commiskey, Patrick; Lazaridis, Filippos; Argyrakis, Panos; Kopelman, Raoul

    2014-11-10

    We use fractal analysis to calculate the protein concentration in a rotating magnetic assembly of microbeads of size 1 μm, which has optimized parameters of sedimentation, binding sites and magnetic volume. We utilize the original Forrest-Witten method, but because the number of bead particles is relatively small, of the order of 500, we use a large number of origins and also a large number of algorithm iterations. We find a value of the fractal dimension in the range 1.70-1.90, as a function of the concentration of thrombin, which plays the role of binding the microbeads together. This is in good agreement with previous results from magnetorotation studies. The calculation of the fractal dimension using multiple points of reference can be applied to any assembly with a relatively small number of particles. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
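
    The multiple-origin, mass-radius estimate described above can be sketched as follows: count particles within growing radii of many reference origins, average, and fit the slope of log N(r) against log r. The synthetic random-walk point set, radii, and number of origins below are illustrative assumptions, not the microbead data.

        import numpy as np

        rng = np.random.default_rng(3)
        pts = np.cumsum(rng.standard_normal((500, 2)), axis=0)   # ~500 particles

        radii = np.geomspace(1.0, 10.0, 8)
        origins = rng.choice(len(pts), size=100, replace=False)  # many reference origins

        mass = np.zeros(len(radii))
        for o in origins:
            d = np.linalg.norm(pts - pts[o], axis=1)
            mass += [(d < r).sum() for r in radii]               # N(r) around this origin
        mass /= len(origins)

        # fractal dimension = slope of log N(r) against log r
        D = np.polyfit(np.log(radii), np.log(mass), 1)[0]
        print(f"estimated fractal dimension: {D:.2f}")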

  13. Qualitative approaches to use of the RE-AIM framework: rationale and methods.

    PubMed

    Holtrop, Jodi Summers; Rabin, Borsika A; Glasgow, Russell E

    2018-03-13

    There have been over 430 publications using the RE-AIM model for planning and evaluation of health programs and policies, as well as numerous applications of the model in grant proposals and national programs. Full use of the model includes the use of qualitative methods to understand why and how results were obtained on different RE-AIM dimensions; however, recent reviews have revealed that qualitative methods have been used infrequently. Having quantitative and qualitative methods and results iteratively inform each other should enhance understanding and lessons learned. Because there have been few published examples of qualitative approaches and methods using RE-AIM for planning or assessment, and no guidance on how qualitative approaches can inform these processes, we provide guidance on qualitative methods to address the RE-AIM model and its various dimensions. The intended audience is researchers interested in applying RE-AIM or similar implementation models, but the methods discussed should also be relevant to those in community or clinical settings. We present directions for, examples of, and guidance on how qualitative methods can be used to address each of the five RE-AIM dimensions. Formative qualitative methods can be helpful in planning interventions and designing for dissemination. Summative qualitative methods are useful when used in an iterative, mixed-methods approach for understanding how and why different patterns of results occur. In summary, qualitative and mixed-methods approaches to RE-AIM help in understanding complex situations and results, why and how outcomes were obtained, and contextual factors not easily assessed using quantitative measures.

  14. Kaluza-Klein cosmology from five-dimensional Lovelock-Cartan theory

    NASA Astrophysics Data System (ADS)

    Castillo-Felisola, Oscar; Corral, Cristóbal; del Pino, Simón; Ramírez, Francisca

    2016-12-01

    We study the Kaluza-Klein dimensional reduction of the Lovelock-Cartan theory in five-dimensional spacetime, with a compact dimension of S1 topology. We find cosmological solutions of the Friedmann-Robertson-Walker class in the reduced spacetime. The torsion and the fields arising from the dimensional reduction induce a nonvanishing energy-momentum tensor in four dimensions. We find solutions describing expanding, contracting, and bouncing universes. The model shows a dynamical compactification of the extra dimension in some regions of the parameter space.

  15. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

    To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled in the study and underwent STD and LD CTC with FBP, ASIR, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image quality, and polyp detection were assessed. Objective image noise analysis demonstrates significant noise reduction using the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7 mGy·cm), STD (dose-length product, 483.6 mGy·cm). LD MBIR CTC objectively shows improved image noise with the parameters used in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference, but because of the small numbers this needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR in the context of CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  16. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  17. A three-dimensional wide-angle BPM for optical waveguide structures.

    PubMed

    Ma, Changbao; Van Keuren, Edward

    2007-01-22

    Algorithms for effective modeling of optical propagation in three-dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra's scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
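
    The numerical core described here, a sparse system solved with a Krylov-subspace iteration, can be sketched with stand-ins: a shifted 2-D finite-difference Laplacian plays the role of the propagation operator below (the actual Padé-based WA-BPM matrix is not reproduced), and the grid size and shift are arbitrary.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab

        n = 50                                        # transverse grid points per dimension
        T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
        L = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))   # 2-D Laplacian
        A = (L - 0.1j * sp.identity(n * n)).tocsr()   # complex shift, as in a BPM step

        b = np.ones(n * n, dtype=complex)             # illustrative right-hand side
        x, info = bicgstab(A, b)                      # iterative Krylov solve
        print("converged" if info == 0 else f"info = {info}")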

  18. A Navier-Stokes solution of the three-dimensional viscous compressible flow in a centrifugal compressor impeller

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.

    1977-01-01

    A two-dimensional time-dependent computer code was utilized to calculate the three-dimensional steady flow within the impeller blading. The numerical method is an explicit time marching scheme in two spatial dimensions. Initially, an inviscid solution is generated on the hub blade-to-blade surface by the method of Katsanis and McNally (1973). Starting with the known inviscid solution, the viscous effects are calculated through iteration. The approach makes it possible to take into account principal impeller fluid-mechanical effects. It is pointed out that the second iterate provides a complete solution to the three-dimensional, compressible, Navier-Stokes equations for flow in a centrifugal impeller. The problems investigated are related to the study of a radial impeller and a backswept impeller.

  19. A three-dimensional wide-angle BPM for optical waveguide structures

    NASA Astrophysics Data System (ADS)

    Ma, Changbao; van Keuren, Edward

    2007-01-01

    Algorithms for effective modeling of optical propagation in three-dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.

  20. Final case for a stainless steel diagnostic first wall on ITER

    NASA Astrophysics Data System (ADS)

    Pitts, R. A.; Bazylev, B.; Linke, J.; Landman, I.; Lehnen, M.; Loesser, D.; Loewenhoff, Th.; Merola, M.; Roccella, R.; Saibene, G.; Smith, M.; Udintsev, V. S.

    2015-08-01

    In 2010 the ITER Organization (IO) proposed to eliminate the beryllium armour on the plasma-facing surface of the diagnostic port plugs and instead to use bare stainless steel (SS), simplifying the design and providing significant cost reduction. Transport simulations at the IO confirmed that charge-exchange sputtering of the SS surfaces would not affect burning plasma operation through core impurity contamination, but a second key issue is the potential melt damage/material loss inflicted by the intense photon radiation flashes expected at the thermal quench of disruptions mitigated by massive gas injection. This paper addresses this second issue through a combination of ITER relevant experimental heat load tests and qualitative theoretical arguments of melt layer stability. It demonstrates that SS can be employed as material for the port plug plasma-facing surface and this has now been adopted into the ITER baseline.

  1. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix, which decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
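
    The solver pattern, conjugate gradients preconditioned by blocks of the metric matrix, can be sketched with stand-ins. Below, a synthetic SPD matrix replaces the Coulomb metric and explicit dense products replace the CFMM matrix-vector evaluation; the matrix and block sizes are arbitrary.

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(4)
        n, nb = 300, 30                           # system size, preconditioner block size
        G = rng.standard_normal((n, n))
        A = G @ G.T + n * np.eye(n)               # synthetic SPD "metric" matrix
        b = rng.standard_normal(n)

        # block-Jacobi preconditioner: invert nb x nb diagonal blocks of A
        inv_blocks = [np.linalg.inv(A[i:i + nb, i:i + nb]) for i in range(0, n, nb)]

        def apply_M(r):
            return np.concatenate([inv_blocks[k] @ r[k * nb:(k + 1) * nb]
                                   for k in range(n // nb)])

        M = LinearOperator((n, n), matvec=apply_M)
        x, info = cg(A, b, M=M)                   # preconditioned conjugate gradients
        print("converged" if info == 0 else f"info = {info}")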

  2. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State-of-the-art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution, we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform, and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packet coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques in modeling non-linearity in our approach.
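
    A hedged sketch of the pipeline, wavelet-packet coefficients per spectral band followed by PCA on the stacked coefficient vectors, is given below. The data cube, wavelet choice, and decomposition level are illustrative assumptions, and the paper's joint-entropy weighting across bands is omitted.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        cube = rng.standard_normal((16, 64, 64))      # 16 spectral bands, 64 x 64 pixels

        features = []
        for band in cube:
            wp = pywt.WaveletPacket2D(band, wavelet="db2", maxlevel=2)
            nodes = wp.get_level(2)                   # all level-2 packet nodes
            features.append(np.concatenate([n.data.ravel() for n in nodes]))

        X = np.stack(features)                        # one coefficient vector per band
        Z = PCA(n_components=4).fit_transform(X)      # reduced representation
        print(Z.shape)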

  3. Ultralow-dose computed tomography imaging for surgery of midfacial and orbital fractures using ASIR and MBIR.

    PubMed

    Widmann, G; Dalla Torre, D; Hoermann, R; Schullian, P; Gassner, E M; Bale, R; Puelacher, W

    2015-04-01

    The influence of dose reductions on diagnostic quality using a series of high-resolution ultralow-dose computed tomography (CT) scans for computer-assisted planning and surgery including the most recent iterative reconstruction algorithms was evaluated and compared with the fracture detectability of a standard cranial emergency protocol. A human cadaver head including the mandible was artificially prepared with midfacial and orbital fractures and scanned using a 64-multislice CT scanner. The CT dose index volume (CTDIvol) and effective doses were calculated using application software. Noise was evaluated as the standard deviation in Hounsfield units within an identical region of interest in the posterior fossa. Diagnostic quality was assessed by consensus reading of a craniomaxillofacial surgeon and radiologist. Compared with the emergency protocol at CTDIvol 35.3 mGy and effective dose 3.6 mSv, low-dose protocols down to CTDIvol 1.0 mGy and 0.1 mSv (97% dose reduction) may be sufficient for the diagnosis of dislocated craniofacial fractures. Non-dislocated fractures may be detected at CTDIvol 2.6 mGy and 0.3 mSv (93% dose reduction). Adaptive statistical iterative reconstruction (ASIR) 50 and 100 reduced average noise by 30% and 56%, and model-based iterative reconstruction (MBIR) by 93%. However, the detection rate of fractures could not be improved due to smoothing effects. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  4. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
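
    In one dimension the inner loop reduces to the familiar iteratively reweighted least-squares update sketched below (a toy stand-in: the randomized GSVD, the alternating direction step and the 3-D gravity kernel are omitted, and the regularization constants are arbitrary).

        import numpy as np

        rng = np.random.default_rng(1)
        m, n = 60, 100
        A = rng.standard_normal((m, n))              # toy forward operator
        x_true = np.zeros(n)
        x_true[40:60] = 1.0                          # blocky model with sharp edges
        b = A @ x_true + 0.01 * rng.standard_normal(m)

        D = np.diff(np.eye(n), axis=0)               # finite-difference operator
        lam, eps = 1.0, 1e-6
        x = np.zeros(n)
        for _ in range(30):                          # IRLS: reweight, then solve
            w = 1.0 / np.sqrt((D @ x) ** 2 + eps)    # TV-type weights ~ 1/|grad|
            x = np.linalg.solve(A.T @ A + lam * D.T @ (w[:, None] * D), A.T @ b)
        print(np.linalg.norm(x - x_true))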

  5. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and the coil dimension, has been widely used to reduce the amount of signal encoding and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI, it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to guarantee accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames are included in modeling the state transition, while a coil-dependent noise statistic describes the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
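
    The sparse-recovery half of such schemes can be illustrated with a generic soft-thresholding iteration that enforces x  -  f sparsity for a single voxel time course; this is illustrative only and does not reproduce the paper's joint Kalman-smoother calibration.

        import numpy as np

        rng = np.random.default_rng(2)
        nt = 64
        xf_true = np.zeros(nt, dtype=complex)
        xf_true[3], xf_true[10] = 2.0, 1.5           # sparse in temporal frequency
        signal = np.fft.ifft(xf_true)                # one voxel's time course
        mask = rng.random(nt) < 0.4                  # retained k-t samples
        y = np.where(mask, signal, 0)

        def soft(z, t):                              # complex soft-thresholding
            mag = np.maximum(np.abs(z) - t, 0.0)
            return mag * np.exp(1j * np.angle(z))

        xf = np.zeros(nt, dtype=complex)
        for _ in range(200):                         # ISTA: data fit + sparsity
            resid = mask * (np.fft.ifft(xf) - y)
            xf = soft(xf - np.fft.fft(resid), 0.01)
        print(np.round(np.abs(xf[:12]), 2))          # peaks near bins 3 and 10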

  7. Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; JET-EFDA contributors

    2015-08-01

    The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes: (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. The main observations are: (a) a low primary erosion source in H-mode plasmas and a reduction of material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30 - 50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10⁻⁵ and is determined by beryllium ions. (b) A reduction of the long-term fuel retention (factor 10 - 20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirms its advantageous PSI behaviour, and gives strong support to the ITER material selection.

  8. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing accurate approximation of the solution and accurate prediction of the average behavior of the system.
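
    The burst-based closure is in the spirit of equation-free projective integration, sketched below on a toy relaxation ensemble (a generic illustration, not the paper's SPH code; all constants are arbitrary).

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.standard_normal(1000) + 5.0          # fine-scale particle states

        def micro_step(x, dt=1e-3):                  # fine model: relaxation + noise
            return x - dt * x + np.sqrt(dt) * 0.1 * rng.standard_normal(x.size)

        t, T, dt_macro = 0.0, 2.0, 0.05
        while t < T:
            m0 = np.mean(x)
            for _ in range(20):                      # short burst of the fine model
                x = micro_step(x)
            dmdt = (np.mean(x) - m0) / (20 * 1e-3)   # estimated average dynamics
            x = x + dmdt * dt_macro                  # project the ensemble forward
            t += dt_macro + 20 * 1e-3
        print(np.mean(x))                            # decays roughly like exp(-t)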

  9. Models of Intracavity Frequency Doubled Lasers

    DTIC Science & Technology

    1990-01-01

    Intermittency; Intermittency Theory; Entropies and Dimension with Intermittency; Resonances, Frobenius-Perron Operators and Power Spectra; and Scaling and...to finding a measure is to approximate the Frobenius-Perron operator, whose domain is the set of measures on M (see, e.g., Li, 1976). An invariant...measure of the system is a fixed point of the Frobenius-Perron operator, and an iterative method using this operator can be shown to converge to an
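
    The iterative construction alluded to in this (truncated) record is commonly realized with Ulam's method, in which a binned approximation of the Frobenius-Perron operator is iterated to its fixed point; bin and sample counts below are illustrative.

        import numpy as np

        f = lambda x: 4.0 * x * (1.0 - x)            # fully chaotic logistic map
        nbins = 200
        edges = np.linspace(0.0, 1.0, nbins + 1)

        P = np.zeros((nbins, nbins))                 # Ulam matrix: cell-to-cell mass
        for i in range(nbins):
            pts = np.linspace(edges[i], edges[i + 1], 100, endpoint=False)
            idx = np.minimum((f(pts) * nbins).astype(int), nbins - 1)
            for j in idx:
                P[i, j] += 0.01                      # each sample carries mass 1/100

        rho = np.ones(nbins) / nbins
        for _ in range(500):                         # iterate the operator to its
            rho = rho @ P                            # fixed point: invariant measure
        print(rho[:5] * nbins)                       # compare 1/(pi*sqrt(x(1-x)))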

  10. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.

  11. Integrated Model Reduction and Control of Aircraft with Flexible Wings

    NASA Technical Reports Server (NTRS)

    Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.

    2013-01-01

    This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated on the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.

  12. Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Bugajski, D.; Sain, M.

    1985-01-01

    The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamics systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
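
    A toy version of such sensitivity-driven pruning (simplified: residual sensitivity is evaluated by refitting after each candidate removal, and the input-redesign step is omitted):

        import itertools
        import numpy as np

        rng = np.random.default_rng(4)
        u = rng.uniform(-1, 1, (200, 2))                 # state and input samples
        y = 1.5 * u[:, 0] - 0.7 * u[:, 0] * u[:, 1]      # true sparse dynamics
        y = y + 0.01 * rng.standard_normal(200)

        powers = [p for p in itertools.product(range(3), repeat=2) if sum(p) <= 2]
        X = np.column_stack([u[:, 0] ** a * u[:, 1] ** b for a, b in powers])

        keep = list(range(len(powers)))
        for _ in range(3):                               # discard three monomials
            costs = []
            for k in keep:                               # residual if k is removed
                cols = [c for c in keep if c != k]
                costs.append(np.linalg.lstsq(X[:, cols], y, rcond=None)[1][0])
            keep.remove(keep[int(np.argmin(costs))])
        print([powers[k] for k in keep])                 # surviving monomials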

  13. Modelling of edge localised modes and edge localised mode control [Modelling of ELMs and ELM control

    DOE PAGES

    Huijsmans, G. T. A.; Chang, C. S.; Ferraro, N.; ...

    2015-02-07

    Edge Localised Modes (ELMs) in ITER Q = 10 H-mode plasmas are likely to lead to large transient heat loads to the divertor. In order to avoid an ELM-induced reduction of the divertor lifetime, the large ELM energy losses need to be controlled. In ITER, ELM control is foreseen using magnetic field perturbations created by in-vessel coils and the injection of small D2 pellets. ITER plasmas are characterised by low collisionality at a high density (a high fraction of the Greenwald density limit). These parameters cannot simultaneously be achieved in current experiments. Thus, the extrapolation of the ELM properties and the requirements for ELM control in ITER rely on the development of validated physics models and numerical simulations. Here, we describe the modelling of ELMs and ELM control methods in ITER. The aim of this paper is not a complete review on the subject of ELM and ELM control modelling but rather to describe the current status and discuss open issues.

  14. Improvements to image quality using hybrid and model-based iterative reconstructions: a phantom study.

    PubMed

    Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus

    2017-01-01

    The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure. It is therefore important to optimize CT scan imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. The aim was to investigate how image quality depends on the reconstruction method and to discuss the patient dose reduction resulting from the use of hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. The image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of hybrid (iDose4) and model-based (IMR) iterative reconstruction methods. iDose4 decreased the noise by 15-45% compared with FBP, depending on the level of iDose4. The IMR reduced the noise even further, by 60-75% compared to FBP. The results are independent of dose. The NPS showed changes in the noise distribution for different reconstruction methods. The low-contrast resolution and CNR were improved with iDose4, and the improvement was even greater with IMR. There is great potential to reduce noise and thereby improve image quality by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower radiation dose and maintain image quality. © The Foundation Acta Radiologica 2016.

  15. Characterizing the orthodontic patient's purchase decision: A novel approach using netnography.

    PubMed

    Pittman, Joseph W; Bennett, M Elizabeth; Koroluk, Lorne D; Robinson, Stacey G; Phillips, Ceib L

    2017-06-01

    A deeper and more thorough characterization of why patients do or do not seek orthodontic treatment is needed for effective shared decision making about receiving treatment. Previous orthodontic qualitative research has identified important dimensions that influence treatment decisions, but our understanding of patients' decisions and how they interpret the benefits and barriers of treatment is lacking. The objectives of this study were to expand our current list of decision-making dimensions and to create a conceptual framework to describe the decision-making process. Discussion boards, rich in orthodontic decision-making data, were identified and analyzed with qualitative methods. An iterative process of data collection, dimension identification, and dimension refinement was performed to saturation. A conceptual framework was created to describe the decision-making process. Fifty-four dimensions captured the ideas discussed in regard to a patient's decision to receive orthodontic treatment. Ten domains were identified: function, esthetics, psychosocial benefits, diagnosis, finances, inconveniences, risks of treatment, individual aspects, societal attitudes, and child-specific influences, each containing specific descriptive and conceptual dimensions. A person's desires, self-perceptions, and viewpoints, the public's views on esthetics and orthodontics, and parenting philosophies impacted perceptions of benefits and barriers associated with orthodontic treatment. We identified an expanded list of dimensions, created a conceptual framework describing the orthodontic patient's decision-making process, and identified dimensions associated with yes and no decisions, giving doctors a better understanding of patient attitudes and expectations. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  16. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
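
    A compact runnable stand-in: scikit-learn's KernelPCA with its built-in learned inverse transform taking the place of the fixed-point pre-image iteration described above.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(5)
        t = rng.uniform(0, 2 * np.pi, 300)           # toy states on a noisy circle
        X = np.column_stack([np.cos(t), np.sin(t)])
        X = X + 0.05 * rng.standard_normal((300, 2))

        kpca = KernelPCA(n_components=2, kernel='rbf', gamma=2.0,
                         fit_inverse_transform=True).fit(X)
        Z = kpca.transform(X)                        # low-dimensional coordinates
        # ... a temporal predictor would forecast Z forward in time here ...
        X_rec = kpca.inverse_transform(Z)            # back to the original space
        print(np.mean(np.linalg.norm(X - X_rec, axis=1)))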

  17. Dimensions and intensity of inter-professional teamwork in primary care: evidence from five international jurisdictions.

    PubMed

    Levesque, Jean-Frederic; Harris, Mark F; Scott, Cathie; Crabtree, Benjamin; Miller, William; Halma, Lisa M; Hogg, William E; Weenink, Jan-Willem; Advocat, Jenny R; Gunn, Jane; Russell, Grant

    2017-10-23

    Inter-professional teamwork in primary care settings offers potential benefits for responding to the increasing complexity of patients' needs. While it is a central element in many reforms to primary care delivery, implementing inter-professional teamwork has proven to be more challenging than anticipated. The objective of this study was to better understand the dimensions and intensity of teamwork and the developmental process involved in creating fully integrated teams. Secondary analyses of qualitative and quantitative data from completed studies conducted in Australia, Canada and USA. Case studies and matrices were used, along with face-to-face group retreats, using a Collaborative Reflexive Deliberative Approach. Four dimensions of teamwork were identified. The structural dimension relates to human resources and mechanisms implemented to create the foundations for teamwork. The operational dimension relates to the activities and programs conducted as part of the team's production of services. The relational dimension relates to the relationships and interactions occurring in the team. Finally, the functional dimension relates to definitions of roles and responsibilities aimed at coordinating the team's activities as well as to the shared vision, objectives and developmental activities aimed at ensuring the long-term cohesion of the team. There was a high degree of variation in the way the dimensions were addressed by reforms across the national contexts. The framework enables a clearer understanding of the incremental and iterative aspects that relate to higher achievement of teamwork. Future reforms of primary care need to address higher-level dimensions of teamwork to achieve its expected outcomes. © The Author 2017. Published by Oxford University Press.

  18. Constructive methods of invariant manifolds for kinetic problems

    NASA Astrophysics Data System (ADS)

    Gorban, Alexander N.; Karlin, Iliya V.; Zinovyev, Andrei Yu.

    2004-06-01

    The concept of the slow invariant manifold is recognized as the central idea underpinning a transition from micro to macro and model reduction in kinetic theories. We present the Constructive Methods of Invariant Manifolds for model reduction in physical and chemical kinetics, developed during the last two decades. The physical problem of reduced description is studied in the most general form as a problem of constructing the slow invariant manifold. The invariance conditions are formulated as the differential equation for a manifold immersed in the phase space (the invariance equation). The equation of motion for immersed manifolds is obtained (the film extension of the dynamics). Invariant manifolds are fixed points for this equation, and slow invariant manifolds are Lyapunov-stable fixed points; thus slowness is presented as stability. A collection of methods to derive analytically and to compute numerically the slow invariant manifolds is presented. Among them, iteration methods based on incomplete linearization, the relaxation method and the method of invariant grids are developed. The systematic use of thermodynamic structures and of the quasi-chemical representation allows the construction of approximations which are in concordance with physical restrictions. The following examples of applications are presented: nonperturbative derivation of physically consistent hydrodynamics from the Boltzmann equation and from the reversible dynamics, for Knudsen numbers Kn∼1; construction of the moment equations for nonequilibrium media and their dynamical correction (instead of an extension of the list of variables) to gain more accuracy in the description of highly nonequilibrium flows; determination of molecular dimensions (as diameters of equivalent hard spheres) from experimental viscosity data; model reduction in chemical kinetics; derivation and numerical implementation of constitutive equations for polymeric fluids; the limits of macroscopic description for polymer molecules, etc.

  19. A Fixed-point Scheme for the Numerical Construction of Magnetohydrostatic Atmospheres in Three Dimensions

    NASA Astrophysics Data System (ADS)

    Gilchrist, S. A.; Braun, D. C.; Barnes, G.

    2016-12-01

    Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin ( Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the method in general and our implementation in code in particular.

  20. Analysis of one dimension migration law from rainfall runoff on urban roof

    NASA Astrophysics Data System (ADS)

    Weiwei, Chen

    2017-08-01

    Research was conducted on the hydrology and water quality processes under natural rainfall conditions, and water samples were collected and analyzed. The pollutants considered included SS, COD and TN. Based on the mass balance principle, a one-dimension migration model was built for rainfall runoff pollution on the surface. The difference equation was developed according to the finite difference method and solved by applying the Newton iteration method. The simulated pollutant concentration process was consistent with the measured values, and the Nash-Sutcliffe coefficient was higher than 0.80. The model has good practicability, which provides evidence for effectively utilizing urban rainfall resources, developing management technologies and measures for non-point source pollution, sponge city construction, and so on.
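
    A minimal sketch of the numerical core described, an implicit finite-difference step solved with Newton's method; the washoff law and constants are invented for illustration.

        import numpy as np

        k, dt = 0.8, 0.1                             # invented rate and time step
        c = 50.0                                     # initial concentration (mg/L)
        for _ in range(50):                          # march the difference equation
            u = c                                    # Newton initial guess
            for _ in range(20):                      # Newton iteration per step
                g = u - c + dt * k * u ** 1.5        # backward-Euler residual
                dg = 1.0 + 1.5 * dt * k * u ** 0.5   # residual derivative
                u -= g / dg
            c = u
        print(c)                                     # concentration after 50 steps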

  1. Use of a channelized Hotelling observer to assess CT image quality and optimize dose reduction for iteratively reconstructed images.

    PubMed

    Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H

    2017-07-01

    The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, the achievable dose reduction using a commercially available IR algorithm without loss of low-contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low-contrast disks (three different contrast levels and seven different diameters) was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low-contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low-contrast detectability (as compared to full-dose FBP images), whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
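
    The CHO computation itself is compact; a generic sketch with synthetic images and random channel templates standing in for the usual Gabor or Laguerre-Gauss channels:

        import numpy as np

        rng = np.random.default_rng(6)
        npix, nchan, ntrain = 4096, 6, 200
        U = rng.standard_normal((npix, nchan))       # stand-in channel templates
        sig = np.zeros(npix)
        sig[2000:2040] = 0.5                         # toy low-contrast signal

        absent = rng.standard_normal((ntrain, npix))
        present = absent + sig                       # signal-known-exactly pairs
        va, vp = absent @ U, present @ U             # channelized feature vectors

        S = 0.5 * (np.cov(va.T) + np.cov(vp.T))      # pooled channel covariance
        w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template
        ta, tp = va @ w, vp @ w
        dprime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (ta.var() + tp.var()))
        print(dprime)                                # detectability index d'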

  2. Iterative expansion microscopy.

    PubMed

    Chang, Jae-Byum; Chen, Fei; Yoon, Young-Gyu; Jung, Erica E; Babcock, Hazen; Kang, Jeong Seuk; Asano, Shoh; Suk, Ho-Jun; Pak, Nikita; Tillberg, Paul W; Wassie, Asmamaw T; Cai, Dawen; Boyden, Edward S

    2017-06-01

    We recently developed a method called expansion microscopy, in which preserved biological specimens are physically magnified by embedding them in a densely crosslinked polyelectrolyte gel, anchoring key labels or biomolecules to the gel, mechanically homogenizing the specimen, and then swelling the gel-specimen composite by ∼4.5× in linear dimension. Here we describe iterative expansion microscopy (iExM), in which a sample is expanded ∼20×. After preliminary expansion a second swellable polymer mesh is formed in the space newly opened up by the first expansion, and the sample is expanded again. iExM expands biological specimens ∼4.5 × 4.5, or ∼20×, and enables ∼25-nm-resolution imaging of cells and tissues on conventional microscopes. We used iExM to visualize synaptic proteins, as well as the detailed architecture of dendritic spines, in mouse brain circuitry.

  4. Inverse source problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
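
    A generic Landweber iteration for a linear model, with a random matrix standing in for the elastodynamic operator and the step size chosen below 1/||A||^2:

        import numpy as np

        rng = np.random.default_rng(7)
        A = rng.standard_normal((80, 120))           # stand-in forward operator
        x_true = np.zeros(120)
        x_true[30:45] = 1.0                          # toy spatial source
        y = A @ x_true + 0.01 * rng.standard_normal(80)

        tau = 1.0 / np.linalg.norm(A, 2) ** 2        # stable step size
        x = np.zeros(120)
        for _ in range(300):                         # x <- x + tau * A^T (y - A x)
            x += tau * A.T @ (y - A @ x)
        print(np.linalg.norm(A @ x - y))             # data misfit after iteration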

  5. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  6. Energy Confinement Recovery in Low Collisionality ITER Shape Plasmas with Applied Resonant Magnetic Perturbations (RMPs)

    NASA Astrophysics Data System (ADS)

    Cui, L.; Grierson, B.; Logan, N.; Nazikian, R.

    2016-10-01

    Application of RMPs to low collisionality (ν*e < 0.4) ITER shape plasmas on DIII-D leads to a rapid reduction in stored energy due to density pumpout that is sometimes followed by a gradual recovery in the plasma stored energy. Understanding this confinement recovery is essential to optimize the confinement of RMP plasmas in present and future devices such as ITER. Transport modeling using TRANSP+TGLF indicates that the core a/LTi is stiff in these plasmas while the ion temperature gradient is much less stiff in the pedestal region. The reduction in the edge density during pumpout leads to an increase in the core ion temperature predicted by TGLF based on experimental data. This is correlated to the increase in the normalized ion heat flux. Transport stiffness in the core combined with an increase in the edge a/LTi results in an increase of the plasma stored energy, consistent with experimental observations. For plasmas where the edge density is controlled using deuterium gas puffs, the effect of the RMP on ion thermal confinement is significantly reduced. Work supported by US DOE Grant DE-FC02-04ER54698 and DE-AC02-09CH11466.

  7. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    PubMed

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose significantly increases in comparison with that used in a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve the current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
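
    The PWLS backbone can be sketched in a few lines; here a quadratic smoothness penalty stands in for the prior-image nonlocal term (which needs a registered earlier scan), and a direct solve replaces the successive over-relaxation loop.

        import numpy as np

        rng = np.random.default_rng(8)
        m, n = 80, 100
        A = rng.standard_normal((m, n))              # toy system (not a CT projector)
        x_true = np.convolve(rng.standard_normal(n), np.ones(7) / 7, mode='same')
        var = 0.05 + 0.05 * rng.random(m)            # data-dependent noise variance
        y = A @ x_true + np.sqrt(var) * rng.standard_normal(m)

        W = np.diag(1.0 / var)                       # statistical weights
        D = np.diff(np.eye(n), axis=0)               # smoothness penalty operator
        beta = 2.0                                   # illustrative penalty strength
        x = np.linalg.solve(A.T @ W @ A + beta * D.T @ D, A.T @ W @ y)
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))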

  9. Dimension Reduction of Multivariable Optical Emission Spectrometer Datasets for Industrial Plasma Processes

    PubMed Central

    Yang, Jie; McArdle, Conor; Daniels, Stephen

    2014-01-01

    A new data dimension-reduction method, called Internal Information Redundancy Reduction (IIRR), is proposed for application to Optical Emission Spectroscopy (OES) datasets obtained from industrial plasma processes. For example in a semiconductor manufacturing environment, real-time spectral emission data is potentially very useful for inferring information about critical process parameters such as wafer etch rates, however, the relationship between the spectral sensor data gathered over the duration of an etching process step and the target process output parameters is complex. OES sensor data has high dimensionality (fine wavelength resolution is required in spectral emission measurements in order to capture data on all chemical species involved in plasma reactions) and full spectrum samples are taken at frequent time points, so that dynamic process changes can be captured. To maximise the utility of the gathered dataset, it is essential that information redundancy is minimised, but with the important requirement that the resulting reduced dataset remains in a form that is amenable to direct interpretation of the physical process. To meet this requirement and to achieve a high reduction in dimension with little information loss, the IIRR method proposed in this paper operates directly in the original variable space, identifying peak wavelength emissions and the correlative relationships between them. A new statistic, Mean Determination Ratio (MDR), is proposed to quantify the information loss after dimension reduction and the effectiveness of IIRR is demonstrated using an actual semiconductor manufacturing dataset. As an example of the application of IIRR in process monitoring/control, we also show how etch rates can be accurately predicted from IIRR dimension-reduced spectral data. PMID:24451453
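
    A drastically simplified sketch of the redundancy-reduction idea, keeping selected peak channels in the original (interpretable) wavelength space; the MDR statistic and the full IIRR procedure are not reproduced.

        import numpy as np

        rng = np.random.default_rng(9)
        t = np.linspace(0.0, 1.0, 400)
        latent = np.vstack([np.sin(6 * t), np.cos(4 * t), t])   # hidden dynamics
        spectra = (rng.random((60, 3)) @ latent).T              # 400 samples x 60 channels
        spectra += 0.01 * rng.standard_normal(spectra.shape)

        peaks = np.argsort(spectra.mean(axis=0))[-20:]          # strongest channels
        C = np.corrcoef(spectra[:, peaks].T)                    # peak correlations

        kept = []
        for i in range(len(peaks)):                             # greedy redundancy cut
            if all(abs(C[i, j]) < 0.98 for j in kept):
                kept.append(i)
        print([int(peaks[i]) for i in kept])                    # retained wavelengths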

  10. Evaluation of an iterative model-based CT reconstruction algorithm by intra-patient comparison of standard and ultra-low-dose examinations.

    PubMed

    Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A

    2018-01-01

    Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, a significantly improved detectability of liver lesions can be reported compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.

  11. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose To evaluate a new iterative reconstruction (IR) algorithm for dose reduction of in vitro coronary artery calcium scoring at different tube currents. Material and Methods An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. Reference tube current was 80 mAs as standard and stepwise reduced to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on selected parameters, radiation dose could be lowered by up to 86% in the large size phantom when selecting a reference tube current of 10 mAs with resulting Agatston levels close to the reference settings. Conclusion New iterative reconstruction kernels may allow for reduction in tube current for established Agatston scoring protocols and consequently for substantial reduction in radiation exposure. PMID:28607763

  13. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality.

    PubMed

    Østerås, Bjørn Helge; Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-08-01

    Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. To evaluate qualitative and quantitative image quality for full-dose and dose-reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Fourteen patients undergoing follow-up head CT were included. All patients underwent a full-dose (FD) exam and a subsequent 15% dose-reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in the Catphan and the vendor's water phantom. There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and -7.5 and 6.3% in brain parenchyma, was observed. Image contrast increased in ASIR reconstructions. The contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction ranged from 3% to 28%, depending on image content. There was no significant difference in qualitative image quality between full-dose FBP and dose-reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality.

  14. Superconductivity modelling: Homogenization of Bean's model in three dimensions, and the problem of transverse conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossavit, A.

    The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a "periodicity cell" with periodic boundary conditions.

  15. Economic Growth and Poverty Reduction: Measurement and Policy Issues. OECD Development Centre Working Paper No. 246

    ERIC Educational Resources Information Center

    Klasen, Stephan

    2005-01-01

    The aim of this Working Paper is to broaden the debate on "pro-poor growth". An exclusive focus on the income dimension of poverty has neglected the non-income dimensions. After an examination of prominent views on the linkages between economic growth, inequality, and poverty reduction this paper discusses the proper definition and…

  16. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, which solves the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the imaged object and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
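
    A one-dimensional toy of the nesting pattern, with a two-pixel binning operator standing in for the tomographic projector and a symmetric Gaussian kernel for the resolution model (all constants illustrative):

        import numpy as np

        rng = np.random.default_rng(10)
        n = 64
        x_true = np.zeros(n)
        x_true[20:24], x_true[40] = 50.0, 80.0
        psf = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
        psf /= psf.sum()
        H = lambda v: np.convolve(v, psf, mode='same')  # image-space resolution model

        P = lambda v: v.reshape(-1, 2).sum(axis=1)      # toy projector: 2x binning
        Pt = lambda w: np.repeat(w, 2)                  # its adjoint

        y = rng.poisson(P(H(x_true)))                   # measured counts

        x = np.ones(n)
        for _ in range(30):
            b = H(x)                                    # current blurred image
            b = b * Pt(y / np.maximum(P(b), 1e-9))      # outer tomographic EM update
            for _ in range(5):                          # nested image-space EM steps
                x = x * H(b / np.maximum(H(x), 1e-9))
        print(np.round(x[18:26], 1))                    # recovered hot region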

  17. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.
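
    The core information-reduction step, stripping the modulation with fed-back decisions so that only a noisy carrier tone remains, can be sketched as follows (idealized, error-free decisions):

        import numpy as np

        rng = np.random.default_rng(11)
        nsym = 2000
        syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, nsym)))
        phase = 0.3                                  # unknown carrier phase offset
        noise = (rng.standard_normal(nsym)
                 + 1j * rng.standard_normal(nsym)) / np.sqrt(2)
        r = syms * np.exp(1j * phase) + 0.3 * noise  # received QPSK symbols

        decisions = syms                             # idealized decoder feedback
        tone = r * np.conj(decisions)                # modulation removed: noisy tone
        print(np.angle(np.mean(tone)))               # phase estimate, close to 0.3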

  18. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of temperature dependent calibration data set of a six component balance.

  19. The SCUBA-2 SRO data reduction cookbook

    NASA Astrophysics Data System (ADS)

    Chapin, Edward; Dempsey, Jessica; Jenness, Tim; Scott, Douglas; Thomas, Holly; Tilanus, Remo P. J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing and displaying SCUBA-2 SRO data. We describe some of the data artefacts present in SCUBA-2 time series and the methods we employ to mitigate them. In particular, we illustrate the various steps required to reduce the data, and the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command. For information on SCUBA-2 data reduction since SRO, please see SC/21.

  20. Development and benchmarking of TASSER(iter) for the iterative improvement of protein structure predictions.

    PubMed

    Lee, Seung Yup; Skolnick, Jeffrey

    2007-07-01

    To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm, which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤ 200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å for the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a RMSD to the native structure < 6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets, where the success rate improves from 32.0 to 34.8%, with the smallest improvement for the Easy targets, from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.

  1. Mechanical Characterization of the Iter Mock-Up Insulation after Reactor Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2010-04-01

    The ITER mock-up project was launched in order to demonstrate the feasibility of an industrial impregnation process using the new cyanate ester/epoxy blend. The mock-up simulates the TF winding pack cross section by a stainless steel structure with the same dimensions as the TF winding pack, at a length of 1 m. It consists of 7 plates simulating the double pancakes, each of which is wrapped with glass fiber/Kapton sandwich tapes. After stacking the 7 plates, additional insulation layers are wrapped around them to simulate the ground insulation. This paper presents the results of the mechanical quality tests on the mock-up pancake insulation. Tensile and short beam shear specimens were cut from the plates extracted from the mock-up and tested at 77 K using a servo-hydraulic material testing device. All tests were repeated after reactor irradiation to a fast neutron fluence of 1×10^22 m^-2 (E > 0.1 MeV). In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in the load-controlled mode. Initial results show a high mechanical strength, as expected from the high number of thin glass fiber layers, and an excellent homogeneity of the material.

  2. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's method and self-consistent iteration) are: (i) it allows the flexibility to choose among a large variety of initial guesses without diverging, (ii) it is easy to implement, especially in higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT-symmetric models.
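    The four steps map directly onto code. The sketch below applies the same fixed-point/renormalization/Gram-Schmidt pattern to a toy linear eigenproblem, a finite-difference 1D harmonic oscillator; the shift sigma and the iteration count are ad hoc choices for this toy, not the OSR prescription of the paper.

```python
import numpy as np

# Toy problem: H = -1/2 d²/dx² + x²/2 on [-10, 10], finite differences.
n, L = 400, 10.0
x = np.linspace(-L, L, n); h = x[1] - x[0]
H = (np.diag(np.full(n, 1.0 / h**2))
     - np.diag(np.full(n - 1, 0.5 / h**2), 1)
     - np.diag(np.full(n - 1, 0.5 / h**2), -1)
     + np.diag(0.5 * x**2))

def fixed_point_mode(H, known, n_iter=5000):
    """Fixed-point iteration with renormalization and Gram-Schmidt deflation
    against previously found modes (a toy stand-in for the OSR pattern)."""
    sigma = np.linalg.norm(H, np.inf)          # shift so low modes dominate
    psi = np.random.default_rng(1).normal(size=H.shape[0])
    for _ in range(n_iter):
        psi = sigma * psi - H @ psi            # step (i)+(iv): fixed-point map
        for phi in known:                      # step (iii): avoid found modes
            psi -= (phi @ psi) * phi
        psi /= np.linalg.norm(psi)             # step (ii): renormalization
    return psi, psi @ H @ psi                  # mode and Rayleigh-quotient value

modes = []
for k in range(3):
    psi, E = fixed_point_mode(H, modes)
    modes.append(psi)
    print(f"E_{k} ≈ {E:.4f}")                  # expect ≈ 0.5, 1.5, 2.5
```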

  3. The ALI-ARMS Code for Modeling Atmospheric non-LTE Molecular Band Emissions: Current Status and Applications

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Feofilov, A. G.; Manuilova, R. O.; Yankovsky, V. A.; Rezac, L.; Pesnell, W. D.; Goldberg, R. A.

    2008-01-01

    The Accelerated Lambda Iteration (ALI) technique was developed in stellar astrophysics at the beginning of the 1990s for solving the non-LTE radiative transfer problem in atomic lines and multiplets in stellar atmospheres. It was later successfully applied to modeling the non-LTE emissions and radiative cooling/heating in the vibrational-rotational bands of molecules in planetary atmospheres. Like the standard lambda iteration, ALI operates with matrices of minimal dimension; however, it provides a higher convergence rate and better stability by removing from the iterative process the photons trapped in optically thick line cores. In the current ALI-ARMS (ALI for Atmospheric Radiation and Molecular Spectra) code version, additional acceleration is provided by the opacity distribution function (ODF) approach and by "decoupling". The former allows replacing the band branches by single lines of special shape, whereas the latter treats the non-linearity caused by strong near-resonant vibration-vibrational level coupling without additionally linearizing the statistical equilibrium equations. The latest applications of the code to the non-LTE diagnostics of molecular band emissions of the Earth's and Martian atmospheres, as well as to non-LTE IR cooling/heating calculations, are discussed.
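    The acceleration mechanism can be demonstrated on the classical two-level-atom iteration S = (1 - ε)Λ[S] + εB. In the toy below, a synthetic row-stochastic matrix stands in for the true Λ operator, and keeping only its diagonal Λ* in the correction step is exactly the "remove the locally trapped photons" idea; all numbers are illustrative.

```python
import numpy as np

# Toy ALI demo for S = (1 - eps) * Lam @ S + eps * B, eps << 1.
# Lam is a synthetic row-stochastic matrix whose near-unit diagonal mimics
# an optically thick line core where photons are locally trapped.
n, eps = 200, 1e-3
i = np.arange(n)
Lam = np.exp(-5.0 * np.abs(i[:, None] - i[None, :]))
Lam /= Lam.sum(axis=1, keepdims=True)
B = np.exp(-0.5 * ((i - n / 2) / 25.0) ** 2)       # non-uniform thermal source

Lam_star = np.diag(Lam)                             # ALI: keep only the local part
S_li, S_ali = B.copy(), B.copy()
for _ in range(300):
    # classical lambda iteration
    S_li = (1 - eps) * (Lam @ S_li) + eps * B
    # accelerated lambda iteration: divide out the trapped-photon factor
    S_fs = (1 - eps) * (Lam @ S_ali) + eps * B
    S_ali = S_ali + (S_fs - S_ali) / (1.0 - (1 - eps) * Lam_star)

res = lambda S: np.linalg.norm(S - (1 - eps) * (Lam @ S) - eps * B)
print(res(S_li), res(S_ali))   # ALI residual is orders of magnitude smaller
```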

  4. The ITER ICRF Antenna Design with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo; Meneghini, Orso; Vecchi, Giuseppe

    2007-11-01

    TOPICA (Torino Polytechnic Ion Cyclotron Antenna) code is an innovative tool for the 3D/1D simulation of Ion Cyclotron Radio Frequency (ICRF) antennas, i.e. accounting for antennas in a realistic 3D geometry with an accurate 1D plasma model [1]. The TOPICA code has been deeply parallelized and has already been proved to be a reliable tool for antenna design and performance prediction. A detailed analysis of the 24-strap ITER ICRF antenna geometry has been carried out, underlining the strong dependence and asymmetries of the antenna input parameters due to the ITER plasma response. We optimized the antenna array geometry dimensions to maximize loading, lower mutual couplings and mitigate sheath effects. The calculated antenna input impedance matrices are TOPICA results of paramount importance for the design of the tuning and matching system. Electric field distributions have also been calculated, and they are used as the main input for the power flux estimation tool. The designed optimized antenna is capable of coupling 20 MW of power to the plasma in the 40 -- 55 MHz frequency range with a maximum voltage of 45 kV in the feeding coaxial cables. [1] V. Lancellotti et al., Nuclear Fusion, 46 (2006) S476-S499

  5. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao

    2018-03-01

    Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that overall the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, whether or not multimodal distributions exist.
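    A compact sketch of the local updating idea follows; g denotes the forward model, and the combined parameter-distance plus data-misfit ranking used to form each local ensemble follows the spirit of ILUES, though the paper's exact weighting and iteration schedule are not reproduced here.

```python
import numpy as np

def es_update(M, D, d_obs, Cd, rng):
    """One ensemble-smoother update: parameters M (n_e, n_m), predictions D (n_e, n_d)."""
    Am, Ad = M - M.mean(0), D - D.mean(0)
    Cmd = Am.T @ Ad / (len(M) - 1)
    Cdd = Ad.T @ Ad / (len(M) - 1)
    K = Cmd @ np.linalg.inv(Cdd + Cd)
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), Cd, size=len(M))
    return M + (d_obs + noise - D) @ K.T

def ilues_like_step(M, g, d_obs, Cd, frac=0.1, rng=None):
    """Update each sample within its local ensemble (nearest in a combined
    parameter-distance + data-misfit metric) so distinct modes stay distinct."""
    rng = rng if rng is not None else np.random.default_rng(0)
    D = np.array([g(m) for m in M])
    misfit = np.sum((D - d_obs) ** 2 / np.diag(Cd), axis=1)
    n_loc = max(int(frac * len(M)), 10)
    out = np.empty_like(M)
    for j in range(len(M)):
        dist = np.sum((M - M[j]) ** 2, axis=1)
        score = dist / (dist.max() + 1e-300) + misfit / (misfit.max() + 1e-300)
        score[j] = -np.inf                    # the sample joins its own ensemble
        J = np.argsort(score)[:n_loc]
        out[j] = es_update(M[J], D[J], d_obs, Cd, rng)[0]   # j is first in J
    return out
```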

  6. Guidelines for internal optics optimization of the ITER EC H and CD upper launcher

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moro, A.; Bruschi, A.; Figini, L.

    2014-02-12

    The importance of localized injection of Electron Cyclotron waves to control Magneto-HydroDynamic (MHD) instabilities is well assessed in tokamak physics, and the set of four Electron Cyclotron (EC) Upper Launchers (UL) in ITER is mainly designed for this purpose. Each of the 4 ULs uses quasi-optical mirrors (shaped and plane, fixed and steerable) to redirect and focus 8 beams (in two rows, with power close to 1 MW per beam coming from the EC transmission lines) into the plasma region where the instability appears. Small beam dimensions and maximum beam superposition guarantee the necessary localization of the driven current. To achieve the goal of MHD stabilization with minimum EC power, so as to preserve the energy confinement in the outer half of the plasma cross section, optimization of the quasi-optical design is required, and a guideline for such a strategy is presented. As a result of this process, and following the guidelines indicated, modifications of the design (new mirror positions, rotation axes and/or focal properties) will be proposed for the next step of an iterative process, including the mandatory compatibility check with the mechanical constraints.

  7. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed either as a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing to vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose reduction.

  8. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
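    The alternating structure of DDIA can be sketched in a few lines. The toy objective below stands in for the bilevel MNDP (whose lower-level user equilibrium assignment is far more involved), and exhaustive enumeration of the discrete vector is only viable for small candidate-link sets.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def ddia(f, y_choices, x0, n_iter=20):
    """Dimension-down alternation: fix the discrete vector y and solve a
    continuous subproblem (CNDP), then fix x and solve the discrete
    subproblem (DNDP) by enumeration; repeat until y stops changing."""
    x, y = np.asarray(x0, float), y_choices[0]
    for _ in range(n_iter):
        x = minimize(lambda x_: f(x_, y), x).x            # CNDP step
        y_new = min(y_choices, key=lambda y_: f(x, y_))   # DNDP step
        if y_new == y:
            break
        y = y_new
    return x, y, f(x, y)

# Toy instance: two candidate links (0/1 decisions) and one continuous
# capacity-expansion variable; f stands in for the network objective.
f = lambda x, y: (x[0] - 2.0) ** 2 + 3.0 * sum(y) - 2.0 * y[0] * x[0]
x, y, v = ddia(f, list(product([0, 1], repeat=2)), x0=[0.0])
print(x, y, v)   # converges to y = (1, 0), x ≈ 3, value -2
```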

  9. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.

  10. Multidimensional FEM-FCT schemes for arbitrary time stepping

    NASA Astrophysics Data System (ADS)

    Kuzmin, D.; Möller, M.; Turek, S.

    2003-05-01

    The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
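    The elimination of negative off-diagonal entries has a standard algebraic form: add the minimal artificial diffusion operator D, symmetric with zero row sums, to the discrete transport operator K. A sketch with a dense matrix for readability (production codes work on sparse matrices):

```python
import numpy as np

def low_order_operator(K):
    """Render a discrete transport operator non-oscillatory by adding the
    minimal artificial diffusion D that eliminates negative off-diagonal
    entries: d_ij = max(0, -k_ij, -k_ji), with zero row sums for conservation."""
    n = K.shape[0]
    D = np.zeros_like(K)
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = max(0.0, -K[i, j], -K[j, i])   # symmetric, nonnegative
        D[i, i] = -np.sum(D[i, np.arange(n) != i])       # zero row sum
    return K + D

K = np.array([[ 1.0, -2.0,  1.0],
              [ 3.0,  0.0, -3.0],
              [-1.0,  2.0, -1.0]])
print(low_order_operator(K))   # all off-diagonal entries are now nonnegative
```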

  11. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
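    As a small illustration of the projection idea, the sketch below runs Arnoldi on P^T for a synthetic Markov chain and extracts the stationary distribution from the projected m-dimensional eigenproblem; the chain, starting vector, and subspace size are arbitrary choices.

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1)v} and the Hessenberg
    matrix H of the projected operator, so that A V ≈ V H."""
    n = len(v)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Stationary distribution: eigenvector of P^T for eigenvalue 1, approximated
# from a 30-dimensional Krylov subspace instead of the full 500x500 problem.
rng = np.random.default_rng(0)
P = rng.random((500, 500)); P /= P.sum(1, keepdims=True)
V, H = arnoldi(P.T, np.ones(500), m=30)
w, Y = np.linalg.eig(H)
pi = np.real(V @ Y[:, np.argmax(w.real)])      # Ritz vector for eigenvalue ≈ 1
pi /= pi.sum()
print(np.linalg.norm(pi @ P - pi))             # small stationarity residual
```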

  12. Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements N, and depends only mildly on the spectral degree n via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.

  13. The Laboratory Course Assessment Survey: A Tool to Measure Three Dimensions of Research-Course Design

    PubMed Central

    Corwin, Lisa A.; Runyon, Christopher; Robinson, Aspen; Dolan, Erin L.

    2015-01-01

    Course-based undergraduate research experiences (CUREs) are increasingly being offered as scalable ways to involve undergraduates in research. Yet few if any design features that make CUREs effective have been identified. We developed a 17-item survey instrument, the Laboratory Course Assessment Survey (LCAS), that measures students’ perceptions of three design features of biology lab courses: 1) collaboration, 2) discovery and relevance, and 3) iteration. We assessed the psychometric properties of the LCAS using established methods for instrument design and validation. We also assessed the ability of the LCAS to differentiate between CUREs and traditional laboratory courses, and found that the discovery and relevance and iteration scales differentiated between these groups. Our results indicate that the LCAS is suited for characterizing and comparing undergraduate biology lab courses and should be useful for determining the relative importance of the three design features for achieving student outcomes. PMID:26466990

  14. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
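    A minimal sketch of one such hybridization follows: a Nelder-Mead polish of the incumbent best is embedded inside every PSO iteration rather than applied once at the end. The parameter values and the particular way NM is injected are illustrative, not the authors' exact scheme.

```python
import numpy as np
from scipy.optimize import minimize

def pso_nm(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO with a Nelder-Mead refinement of the incumbent best embedded in
    every iteration (one way to hybridize the two methods)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, (n_particles, len(lo)))
    V = np.zeros_like(X)
    P = X.copy()
    pval = np.array([f(x) for x in X])
    for _ in range(n_iter):
        g = P[pval.argmin()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx < pval
        P[better], pval[better] = X[better], fx[better]
        k = pval.argmin()                      # NM step inside the PSO loop
        res = minimize(f, P[k], method='Nelder-Mead', options={'maxiter': 20})
        if res.fun < pval[k]:
            P[k], pval[k] = res.x, res.fun
    k = pval.argmin()
    return P[k], pval[k]

rosen = lambda x: np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)
print(pso_nm(rosen, [(-5, 5)] * 4))   # should approach the minimum at [1, 1, 1, 1]
```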

  15. Some Remarks on GMRES for Transport Theory

    NASA Technical Reports Server (NTRS)

    Patton, Bruce W.; Holloway, James Paul

    2003-01-01

    We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments, we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. We finally show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
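    The "GMRES as an accelerator around an existing solver" idea can be shown with a stand-in operator: if one source-iteration step computes K(φ) + b, GMRES can instead solve (I - K)φ = b using the same routine as a black box. The contraction K below is a synthetic surrogate for a sweep-plus-scattering operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 300
K = rng.random((n, n))
K *= 0.95 / np.linalg.norm(K, 2)      # contraction, like a scattering ratio c < 1
b = rng.random(n)

# Source iteration is phi <- K @ phi + b; GMRES solves (I - K) phi = b,
# reusing the same "one sweep" routine (here K @ v) as a black box.
A = LinearOperator((n, n), matvec=lambda v: v - K @ v)
phi, info = gmres(A, b)
print(info, np.linalg.norm(phi - (K @ phi + b)))   # 0 and a small residual
```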

  16. Fast secant methods for the iterative solution of large nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Deuflhard, Peter; Freund, Roland; Walter, Artur

    1990-01-01

    A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well-known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection-diffusion type in 2-D with interior layers give a first impression of the possible power of the derived good Broyden variant.
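    SciPy happens to expose both rank-1 update families, so the good/bad distinction can be tried directly on a linear system recast as the root-finding problem F(x) = Ax - b = 0; the matrix here is a synthetic well-conditioned nonsymmetric example, not one of the paper's convection-diffusion systems.

```python
import numpy as np
from scipy.optimize import broyden1, broyden2

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # well-conditioned, nonsymmetric
b = rng.normal(size=n)
F = lambda x: A @ x - b                          # root problem for A x = b

for solver in (broyden1, broyden2):              # "good" and "bad" Broyden updates
    x = solver(F, np.zeros(n), f_tol=1e-8)
    print(solver.__name__, np.linalg.norm(A @ x - b))
```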

  17. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justifying the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
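    The mechanics of "dirty image, then weighted subsampled FFT" can be shown in a toy where the measurement operator is itself a masked FFT; in that degenerate case the weighting is trivial and no real compression occurs (it is in the continuous-visibility setting, with far more visibilities than pixels, that the reduction pays off).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.random((n, n)) < 0.3                   # toy uv-sampling pattern

def Phi(x):                                       # measurement operator: masked FFT
    return np.fft.fft2(x)[mask]

def Phi_H(y):                                     # adjoint: grid and inverse-FFT
    g = np.zeros((n, n), complex)
    g[mask] = y
    return np.fft.ifft2(g) * n * n                # the "dirty image"

x_true = np.zeros((n, n)); x_true[20:30, 35:45] = 1.0
y = Phi(x_true) + 0.01 * (rng.normal(size=mask.sum())
                          + 1j * rng.normal(size=mask.sum()))

dirty = Phi_H(y)                                  # 1) compute the dirty image
r = np.fft.fft2(dirty)[mask] / (n * n)            # 2) weighted, subsampled FFT
print(np.allclose(r, y))                          # embedding is lossless here
```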

  18. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients. The computational complexity of predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
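    The "one coefficient at a time" selection can be illustrated with a greedy orthogonal-matching-pursuit-style loop over a Hermite PCE basis, used here as a simple stand-in for the sparse Bayesian learning formulation; the toy model and degrees are arbitrary.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product

rng = np.random.default_rng(0)
xi = rng.normal(size=(400, 2))                       # two standard-normal inputs
y = 1.0 + xi[:, 0] - 0.5 * (xi[:, 1]**2 - 1) + 0.01 * rng.normal(size=400)

def herme(k, t):                                     # probabilists' Hermite He_k
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(t, c)

deg = 4
multis = [a for a in product(range(deg + 1), repeat=2) if sum(a) <= deg]
Psi = np.column_stack([herme(a[0], xi[:, 0]) * herme(a[1], xi[:, 1])
                       for a in multis])

active, resid = [], y.copy()
for _ in range(5):
    # add the basis term most correlated with the residual (if not yet in)
    j = int(np.argmax(np.abs(Psi.T @ resid) / np.linalg.norm(Psi, axis=0)))
    if j not in active:
        active.append(j)
    coef, *_ = np.linalg.lstsq(Psi[:, active], y, rcond=None)
    resid = y - Psi[:, active] @ coef

print([multis[j] for j in active])   # expect (0,0), (1,0) and (0,2) to appear
```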

  19. Predictive spectroscopy and chemical imaging based on novel optical systems

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew Paul

    1998-10-01

    This thesis describes two futuristic optical systems designed to surpass contemporary spectroscopic methods for predictive spectroscopy and chemical imaging. These systems are advantageous over current techniques in a number of ways, including lower cost, enhanced portability, shorter analysis time, and improved S/N. First, a novel optical approach to predicting chemical and physical properties based on principal component analysis (PCA) is proposed and evaluated. A regression vector produced by PCA is designed into the structure of a set of paired optical filters. Light passing through the paired filters produces an analog detector signal directly proportional to the chemical/physical property for which the regression vector was designed. Second, a novel optical system is described which takes a single-shot approach to chemical imaging with high spectroscopic resolution using a dimension-reduction fiber-optic array. Images are focused onto a two-dimensional matrix of optical fibers which are drawn into a linear distal array with specific ordering. The distal end is imaged with a spectrograph equipped with an ICCD camera for spectral analysis. Software is used to extract the spatial/spectral information contained in the ICCD images and deconvolute it into wavelength-specific reconstructed images or position-specific spectra which span a multi-wavelength space. This thesis includes a description of the fabrication of two dimension-reduction arrays as well as an evaluation of the system for spatial and spectral resolution, throughput, image brightness, resolving power, depth of focus, and channel cross-talk. PCA is performed on the images by treating rows of the ICCD images as spectra and plotting the scores of each PC as a function of reconstruction position. In addition, iterative target transformation factor analysis (ITTFA) is performed on the spectroscopic images to generate "true" chemical maps of samples. Univariate zero-order images, univariate first-order spectroscopic images, bivariate first-order spectroscopic images, and multivariate first-order spectroscopic images of the temporal development of laser-induced plumes are presented and interpreted. Reconstructed chemical images generated using bivariate and trivariate wavelength techniques, bimodal and trimodal PCA methods, and bimodal and trimodal ITTFA approaches are also included.

  20. Adaptive artificial neural network for autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.

  1. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function within an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection function, can also be used for noise removal applications. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising is reduced to decrementing pixel values by their Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise. The method is also compared with the state-of-the-art BM3D method on several images. The algorithm appears to be simple, fast and comparable with many classic denoising algorithms for Gaussian noise.
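    A sketch of the iteration is given below; the crude variance-based weight is a stand-in for the paper's local noise estimator function, and the step size is chosen for stability of the explicit update.

```python
import numpy as np
from scipy.ndimage import convolve

def laplacian_denoise(img, n_iter=10, k=0.2, win=3):
    """Iterative Laplacian-based smoothing: adjust each pixel by its 3x3
    Laplacian weighted by a local noise estimate (a sketch of the idea;
    the paper's exact estimator is not reproduced here)."""
    lap_kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    box = np.ones((win, win)) / win**2
    out = img.astype(float)
    for _ in range(n_iter):
        lap = convolve(out, lap_kernel, mode='nearest')
        mean = convolve(out, box, mode='nearest')
        var = convolve((out - mean) ** 2, box, mode='nearest')
        weight = var / (var + var.mean() + 1e-12)   # crude local noise estimate
        out = out + k * weight * lap                # diffuse where "noisy"
    return out

noisy = 1.0 + np.random.default_rng(0).normal(0, 0.1, (64, 64))
print(laplacian_denoise(noisy).std() < noisy.std())   # output is smoother
```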

  2. Improved spatial resolution and lower-dose pediatric CT imaging: a feasibility study to evaluate narrowing the X-ray photon energy spectrum.

    PubMed

    Benz, Mark G; Benz, Matthew W; Birnbaum, Steven B; Chason, Eric; Sheldon, Brian W; McGuire, Dale

    2014-08-01

    This feasibility study has shown that improved spatial resolution and reduced radiation dose can be achieved in pediatric CT by narrowing the X-ray photon energy spectrum. This is done by placing a hafnium filter between the X-ray generator and a pediatric abdominal phantom. A CT system manufactured in 1999 that was in the process of being remanufactured was used as the platform for this study. This system had the advantage of easy access to the X-ray generator for modifications to change the X-ray photon energy spectrum; it also had the disadvantage of not employing the latest post-imaging noise reduction iterative reconstruction technology. Because we observed improvements after changing the X-ray photon energy spectrum, we recommend a future study combining this change with an optimized iterative reconstruction noise reduction technique.

  3. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  4. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  5. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    DOE PAGES

    de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...

    2017-11-22

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays slower than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase. But vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95~3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. Here, the results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  6. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-MOD Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; contributors, JET; the KSTAR Team; the NSTX-U Team; the TCV Team; IOS members, ITPA; experts

    2018-02-01

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays slower than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase. But vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  7. Assessment of Primary Site Response in Children With High-Risk Neuroblastoma: An International Multicenter Study

    PubMed Central

    McHugh, Kieran; Naranjo, Arlene; Van Ryn, Collin; Kirby, Chaim; Brock, Penelope; Lyons, Karen A.; States, Lisa J.; Rojas, Yesenia; Miller, Alexandra; Volchenboum, Sam L.; Simon, Thorsten; Krug, Barbara; Sarnacki, Sabine; Valteau-Couanet, Dominique; von Schweinitz, Dietrich; Kammer, Birgit; Granata, Claudio; Pio, Luca; Park, Julie R.; Nuchtern, Jed

    2016-01-01

    Purpose: The International Neuroblastoma Response Criteria (INRC) require serial measurements of primary tumors in three dimensions, whereas the Response Evaluation Criteria in Solid Tumors (RECIST) require measurement in one dimension. This study was conducted to identify the preferred method of primary tumor response assessment for use in revised INRC. Patients and Methods: Patients younger than 20 years with high-risk neuroblastoma were eligible if they were diagnosed between 2000 and 2012 and if three primary tumor measurements (antero-posterior, width, cranio-caudal) were recorded at least twice before resection. Responses were defined as ≥ 30% reduction in longest dimension as per RECIST, ≥ 50% reduction in volume as per INRC, or ≥ 65% reduction in volume. Results: Three-year event-free survival for all patients (N = 229) was 44% and overall survival was 58%. The sensitivity of both volume response measures (ability to detect responses in patients who survived) exceeded the sensitivity of the single dimension measure, but the specificity of all response measures (ability to identify lack of response in patients who later died) was low. In multivariable analyses, none of the response measures studied was predictive of outcome, and none was predictive of the extent of resection. Conclusion: None of the methods of primary tumor response assessment was predictive of outcome. Measurement of three dimensions followed by calculation of resultant volume is more complex than measurement of a single dimension. Primary tumor response in children with high-risk neuroblastoma should therefore be evaluated in accordance with RECIST criteria, using the single longest dimension. PMID:26755515

  8. Boosted Kaluza-Klein magnetic monopole

    NASA Astrophysics Data System (ADS)

    Hashemi, S. Sedigheh; Riazi, Nematollah

    2018-06-01

    We consider a Kaluza-Klein vacuum solution which is closely related to the Gross-Perry-Sorkin (GPS) magnetic monopole. The solution can be obtained from the Euclidean Taub-NUT solution with an extra compact fifth spatial dimension within the formalism of Kaluza-Klein reduction. We study its physical properties as appearing in (3 + 1) spacetime dimensions, which turns out to be a static magnetic monopole. We then boost the GPS magnetic monopole along the extra dimension, and perform the Kaluza-Klein reduction. The resulting four-dimensional spacetime is a rotating stationary system, with both electric and magnetic fields. In fact, after the boost the magnetic monopole turns into a string connected to a dyon.

  9. A simple method for low-contrast detectability, image quality and dose optimisation with CT iterative reconstruction algorithms and model observers.

    PubMed

    Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano

    2017-01-01

    The aim of this work was to evaluate detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any image detriment. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function; noise, and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (up to 100 mAs). Below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining a good image detectability. Model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.

  10. Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.

  11. Active Subspace Methods for Data-Intensive Inverse Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi

    2017-04-27

    The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
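    The basic computation behind active subspaces is an eigendecomposition of the gradient outer-product matrix C = E[∇f ∇fᵀ]; directions with large eigenvalues are the ones the model actually responds to. A minimal sketch, with a synthetic function varying along one hidden direction:

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from gradient samples via the
    dominant eigenvectors of C = E[grad f grad f^T]."""
    G = np.asarray(grad_samples)                  # shape (n_samples, dim)
    C = G.T @ G / len(G)
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    return w[order], V[:, order[:k]]

# Toy: f(x) = (a . x)^2 varies only along the hidden direction a in 10-D.
rng = np.random.default_rng(0)
a = rng.normal(size=10); a /= np.linalg.norm(a)
X = rng.normal(size=(1000, 10))
grads = (2 * (X @ a))[:, None] * a + 0.01 * rng.normal(size=X.shape)
w, W = active_subspace(grads, k=1)
print(abs(W[:, 0] @ a))   # ≈ 1: the active direction is recovered
```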

  12. Dimensional reduction as a method to obtain dual theories for massive spin two in arbitrary dimensions

    NASA Astrophysics Data System (ADS)

    Khoudeir, A.; Montemayor, R.; Urrutia, Luis F.

    2008-09-01

    Using the parent Lagrangian method together with a dimensional reduction from D to (D-1) dimensions, we construct dual theories for massive spin two fields in arbitrary dimensions in terms of a mixed symmetry tensor T_{A[A_1 A_2 ... A_{D-2}]}. Our starting point is the well-studied massless parent action in dimension D. The resulting massive Stueckelberg-like parent actions in (D-1) dimensions inherit all the gauge symmetries of the original massless action and can be gauge fixed in two alternative ways, yielding the possibility of having a parent action with either a symmetric or a nonsymmetric Fierz-Pauli field e_{AB}. Even though the dual sector in terms of the standard spin two field includes only the symmetric part of e_{AB} in both cases, these two possibilities yield different results in terms of the alternative dual field T_{A[A_1 A_2 ... A_{D-2}]}. In particular, the nonsymmetric case reproduces the Freund-Curtright action as the dual to the massive spin two field action in four dimensions.

  13. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths, and a data set normally consists of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, it is very difficult to transmit and store hyper-spectral images, and dimension reduction techniques are needed to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, dimension reduction is a feasible way to compress the data volume. This paper proposes a novel band selection-based dimension reduction method which adaptively selects the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA): an index is computed for every band, and the indexes are ranked in descending order of magnitude. Based on a threshold, the system can then adaptively and reasonably select bands. The proposed method avoids the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated in several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting band images.
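    A minimal sketch of PCA-based band ranking follows; the specific index (variance-weighted loadings on the leading components) and the fixed number of retained bands are stand-ins for the adaptive, threshold-based index of the paper.

```python
import numpy as np

def select_bands(cube, n_keep):
    """Rank hyper-spectral bands by a PCA-derived information index and keep
    the top ones (a sketch of band selection, not the paper's exact index)."""
    h, w, nb = cube.shape
    X = cube.reshape(-1, nb).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    evals, evecs = evals[::-1], evecs[:, ::-1]       # descending order
    k = min(5, nb)                                   # leading components only
    score = (evecs[:, :k] ** 2) @ evals[:k]          # per-band index
    return np.argsort(score)[::-1][:n_keep]

cube = np.random.default_rng(0).random((50, 60, 120))   # toy cube: 120 bands
print(select_bands(cube, n_keep=12))                     # indices of kept bands
```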

  14. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  15. Quantitative Analysis of the Effect of Iterative Reconstruction Using a Phantom: Determining the Appropriate Blending Percentage

    PubMed Central

    Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2015-01-01

    Purpose: To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) for a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods: The CT performance phantom was scanned with standard and dose reduction protocols, including reduced mAs or kVp. Image quality parameters including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results: As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation, and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion: Blending 40% ASIR into the 40% reduced tube-current protocol can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772

  16. Periradicular Infiltration of the Cervical Spine: How New CT Scanner Techniques and Protocol Modifications Contribute to the Achievement of Low-Dose Interventions.

    PubMed

    Elsholtz, Fabian Henry Jürgen; Kamp, Julia Evi-Katrin; Vahldiek, Janis Lucas; Hamm, Bernd; Niehues, Stefan Markus

    2018-06-18

    CT-guided periradicular infiltration of the cervical spine is an effective symptomatic treatment in patients with radiculopathy-associated pain syndromes. This study evaluates the robustness and safety of a low-dose protocol on a CT scanner with iterative reconstruction software. A total of 183 patients who underwent periradicular infiltration therapy of the cervical spine were included in this study. 82 interventions were performed on a new CT scanner with a new intervention protocol using an iterative reconstruction algorithm. Spot scanning was implemented for planning, and a basic low-dose setup of 80 kVp and 5 mAs was established during intermittent fluoroscopy. The comparison group included 101 prior interventions on a scanner without iterative reconstruction. The dose-length product (DLP), number of acquisitions, pain reduction on a numeric analog scale, and protocol changes to achieve a safe intervention were recorded. The median DLP for the whole intervention was 24.3 mGy*cm in the comparison group and 1.8 mGy*cm in the study group. The median pain reduction was -3 in the study group and -2 in the comparison group. A 5 mAs increase in the tube current-time product was required in 5 patients of the study group. Implementation of a new scanner and intervention protocol resulted in a 92.6% dose reduction without a compromise in safety and pain relief. The dose needed here is more than 75% lower than doses used for similar interventions in published studies. An increase of the tube current-time product was needed in only 6% of interventions. · The presented ultra-low-dose protocol allows for a significant dose reduction without compromising outcome. · The protocol includes spot scanning for planning purposes and a basic setup of 80 kVp and 5 mAs. · The iterative reconstruction algorithm is activated during fluoroscopy. · Elsholtz FH, Kamp JE, Vahldiek JL et al. Periradicular Infiltration of the Cervical Spine: How New CT Scanner Techniques and Protocol Modifications Contribute to the Achievement of Low-Dose Interventions. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-3930. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Conservative and bounded volume-of-fluid advection on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ivey, Christopher B.; Moin, Parviz

    2017-12-01

    This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in efforts to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion for the next iteration. The choice of nominal flux polyhedron affects the cost and accuracy of the scheme; however, it does not affect the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of a quadrilateral and triangular primal mesh, in two dimensions, and on the median duals of a hexahedral, wedge, and tetrahedral primal mesh, in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of nominal flux polyhedron, the NIFPA scheme exhibited accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra which demonstrate second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to the traditional topologically complex second-order accurate VOF advection scheme.
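
    The core of the NIFPA construction is a scalar iteration: extrude a nominal polyhedron, clip it against the available volume, and adjust the extrusion until the clipped volume matches the flux volume. The sketch below illustrates only this volume-matching loop; the geometric operations are replaced by a hypothetical monotone `clipped_volume` function, and the secant-style update mirrors the "volume mismatch drives the next extrusion" description above.

    ```python
    import math

    def clipped_volume(s):
        # stand-in for: extrude the nominal polyhedron by s, intersect it with
        # the volume guaranteed available, and return the resulting volume
        return 1.0 - math.exp(-2.0 * s)      # monotone, saturating (hypothetical)

    def nifpa_extrusion(target, s0=0.1, s1=1.0, tol=1e-12, max_iter=50):
        v0, v1 = clipped_volume(s0), clipped_volume(s1)
        for _ in range(max_iter):
            if abs(v1 - target) < tol:
                break
            # the volume mismatch drives the next extrusion (secant-style update)
            s0, s1 = s1, s1 + (target - v1) * (s1 - s0) / (v1 - v0)
            v0, v1 = v1, clipped_volume(s1)
        return s1

    s = nifpa_extrusion(target=0.5)
    print(s, clipped_volume(s))              # s ~ ln(2)/2, volume ~ 0.5
    ```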

  18. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    NASA Astrophysics Data System (ADS)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder-type deep neural network to the task of dimension reduction of molecular dynamics data. We show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes, beyond the capabilities of linear dimension reduction techniques.
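
    A minimal sketch of the time-lagged autoencoder idea, under the assumption of a feature trajectory stored as a single array: the network is trained to reconstruct the frame at time t + tau from the frame at time t, so the bottleneck is pushed toward slowly varying coordinates. Layer sizes, lag, and training settings are illustrative, not the paper's.

    ```python
    import torch
    import torch.nn as nn

    tau, n_feat, n_latent = 10, 30, 2
    x = torch.randn(5000, n_feat)              # stand-in for an MD feature trajectory
    x_t, x_lag = x[:-tau], x[tau:]             # pairs (x(t), x(t + tau))

    encoder = nn.Sequential(nn.Linear(n_feat, 64), nn.Tanh(), nn.Linear(64, n_latent))
    decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(), nn.Linear(64, n_feat))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    for epoch in range(50):
        opt.zero_grad()
        # reconstruct the time-lagged frame, not the input itself
        loss = nn.functional.mse_loss(decoder(encoder(x_t)), x_lag)
        loss.backward()
        opt.step()

    z = encoder(x)                             # low-dimensional embedding
    ```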

  19. Pollution Reduction Technology Program for Small Jet Aircraft Engines, Phase 2

    NASA Technical Reports Server (NTRS)

    Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.

    1978-01-01

    A series of iterative combustor pressure rig tests was conducted on two combustor concepts applied to the AiResearch TFE731-2 turbofan engine combustion system for the purpose of optimizing combustor performance and operating characteristics consistent with low emissions. The two concepts were an axial air-assisted airblast fuel injection configuration with variable-geometry air swirlers and a staged premix/prevaporization configuration. The iterative rig testing and modification sequence on both concepts was intended to provide operational compatibility with the engine and determine one concept for further evaluation in a TFE731-2 engine.

  20. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D data sets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were effective in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by up to a factor of 4 to 6, depending on the particular data set, reaching 8 nm at best, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090
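
    The study used blind deconvolution with two PSFs; as a simpler stand-in that still shows how detail sharpens with iteration count, here is a classical Richardson-Lucy loop with a known PSF, in 2-D for brevity. The PSF and test image are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=25, eps=1e-12):
        estimate = np.full_like(observed, observed.mean())  # flat starting guess
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / (blurred + eps)              # data/model mismatch
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate

    psf = np.outer(np.hanning(9), np.hanning(9)); psf /= psf.sum()
    truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 44] = 1.0
    observed = fftconvolve(truth, psf, mode='same') + 1e-3  # keep strictly positive
    restored = richardson_lucy(observed, psf)
    ```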

  1. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selecting an optimal experimental design for quantitative magnetization transfer imaging based on iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free-form optimal design methods in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
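
    A generic sketch of iterative design reduction, not the authors' exact criterion: starting from a dense candidate sampling, greedily drop the point whose removal least degrades a Fisher-information measure (D-optimality here as a stand-in; the paper targets parameter variances). The Jacobian and target design size below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.normal(size=(40, 5))        # 40 candidate points, 5 model parameters
    keep = list(range(J.shape[0]))

    while len(keep) > 12:               # target design size (assumed)
        scores = []
        for i in keep:
            sub = [k for k in keep if k != i]
            sign, logdet = np.linalg.slogdet(J[sub].T @ J[sub])
            scores.append((logdet if sign > 0 else -np.inf, i))
        _, worst = max(scores)          # point whose removal hurts least
        keep.remove(worst)

    print(sorted(keep))                 # reduced sampling of the Z-spectrum
    ```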

  2. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as known from the standard filtered back projection (FBP) algorithm, with the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It models how noise propagates through the reconstruction steps, feeds this model back into the loop, and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper, the effect of ASiR on the contrast-to-noise ratio is studied using the low-contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared with the standard FBP method. For the same contrast-to-noise ratio, the images from ASiR can be obtained using 60% less current, leading to a dose reduction of the same amount.
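
    As a reminder of the quantity being traded against dose here, a contrast-to-noise ratio can be computed directly from two phantom ROIs; the definition below (insert-background difference over background noise) is a common convention and may differ in detail from the study's.

    ```python
    import numpy as np

    def cnr(roi_insert, roi_background):
        # contrast between the low-contrast insert and the background,
        # normalized by the background noise
        return abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()

    rng = np.random.default_rng(1)
    insert = rng.normal(46.0, 5.0, (20, 20))      # stand-in HU samples
    background = rng.normal(40.0, 5.0, (20, 20))
    print(cnr(insert, background))
    ```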

  3. Scheduled Relaxation Jacobi method: Improvements and applications

    NASA Astrophysics Data System (ADS)

    Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.

    2016-09-01

    Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to drive the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high-resolution cases, close to 1000. Most of the success in finding SRJ optimal schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
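
    The structural idea of SRJ, cycling Jacobi sweeps through a fixed schedule of over- and under-relaxation factors, can be sketched on the 1-D Poisson problem -u'' = f with u(0) = u(1) = 0, as below. The schedule shown is merely a stable illustration, not one of the optimized schemes derived in the paper.

    ```python
    import numpy as np

    n = 127
    h = 1.0 / (n + 1)
    f = np.ones(n)
    u = np.zeros(n)
    # illustrative schedule: one over-relaxed sweep followed by three
    # under-relaxed sweeps; stable here, but NOT an optimized SRJ scheme
    schedule = [4.0, 0.67, 0.67, 0.67]

    for sweep in range(400):
        omega = schedule[sweep % len(schedule)]
        up = np.concatenate(([0.0], u, [0.0]))            # Dirichlet boundaries
        u_jacobi = 0.5 * (up[:-2] + up[2:] + h * h * f)   # plain Jacobi update
        u = u + omega * (u_jacobi - u)                    # scheduled relaxation

    up = np.concatenate(([0.0], u, [0.0]))
    residual = f - (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2
    print(np.max(np.abs(residual)))                       # monitor convergence
    ```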

  4. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low-dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low-dose CT images together with a substantial noise reduction. PMID:26089972
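
    The residual-matching idea behind NCRE can be illustrated independently of BM3D: tune a denoiser's strength until the residual it removes has the statistics of the known noise. In the sketch below, a Gaussian filter stands in for the denoiser and a bisection on the residual standard deviation stands in for the NCRE update; both substitutions are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    clean = np.zeros((128, 128)); clean[32:96, 32:96] = 100.0
    sigma_noise = 5.0                      # assumed known noise level
    noisy = clean + rng.normal(0.0, sigma_noise, clean.shape)

    lo, hi = 0.0, 5.0                      # search range for the filter strength
    for _ in range(30):
        s = 0.5 * (lo + hi)
        residual = noisy - gaussian_filter(noisy, s)
        if residual.std() < sigma_noise:   # removed too little -> smooth more
            lo = s
        else:                              # removed more than the noise -> back off
            hi = s

    print("selected strength:", 0.5 * (lo + hi))
    ```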

  5. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
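
    For reference, one level of classical cyclic reduction folds each odd-indexed equation of a tridiagonal system into its neighbors, halving the system; applied recursively it solves the 1-D problem exactly, as the abstract notes. The following is a minimal sketch for systems of size 2^k - 1, verified against a dense solve.

    ```python
    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system with n = 2**k - 1 unknowns.
        a: sub-diagonal with a[0] = 0, b: diagonal,
        c: super-diagonal with c[-1] = 0, d: right-hand side."""
        n = len(b)
        if n == 1:
            return d / b
        i = np.arange(1, n - 1, 2)                  # odd rows survive the reduction
        alpha = -a[i] / b[i - 1]
        gamma = -c[i] / b[i + 1]
        a2 = alpha * a[i - 1]
        b2 = b[i] + alpha * c[i - 1] + gamma * a[i + 1]
        c2 = gamma * c[i + 1]
        d2 = d[i] + alpha * d[i - 1] + gamma * d[i + 1]
        x = np.zeros(n)
        x[i] = cyclic_reduction(a2, b2, c2, d2)     # recurse on half-size system
        j = np.arange(0, n, 2)                      # back-substitute even rows
        left = np.where(j > 0, a[j] * x[np.maximum(j - 1, 0)], 0.0)
        right = np.where(j < n - 1, c[j] * x[np.minimum(j + 1, n - 1)], 0.0)
        x[j] = (d[j] - left - right) / b[j]
        return x

    # verify on the 1-D Poisson matrix (n = 2**5 - 1)
    n = 2**5 - 1
    a = -np.ones(n); a[0] = 0.0
    b = 2.0 * np.ones(n)
    c = -np.ones(n); c[-1] = 0.0
    d = np.ones(n)
    x = cyclic_reduction(a, b, c, d)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(x, np.linalg.solve(A, d)))    # True
    ```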

  6. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method that involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. The resulting successive-relaxation iteration (SG-SR) achieves an additional improvement in convergence speed over the standard Savitzky-Golay procedure. The proposed algorithm (RIA-SG-SR), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as with experimentally measured Raman spectra from non-biological and biological samples. The method yields a significant reduction in computing cost while consistently rejecting fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved a 1-order-of-magnitude improvement in iteration number and a 2-orders-of-magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 s to 0.094 s. In general, the SG-SR processing can be completed within dozens of milliseconds, which enables a real-time procedure in practical situations.
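
    The baseline scheme the paper accelerates is iterative Savitzky-Golay smoothing with clipping: repeatedly smooth the spectrum and keep the pointwise minimum, so narrow Raman peaks are eaten away while the broad fluorescence background survives as the baseline. The sketch below shows this basic loop only; the Gauss-Seidel and relaxation-factor (SG-SR) refinements are omitted, and the window, order, and synthetic spectrum are illustrative.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def iterative_sg_baseline(spectrum, window=101, order=3, n_iter=100):
        baseline = spectrum.copy()
        for _ in range(n_iter):
            smoothed = savgol_filter(baseline, window, order)
            baseline = np.minimum(baseline, smoothed)  # clip peaks from above
        return baseline

    x = np.linspace(0, 1, 2000)
    fluorescence = 50.0 * np.exp(-2.0 * x)                  # broad background
    raman = 5.0 * np.exp(-0.5 * ((x - 0.4) / 0.003) ** 2)   # narrow band
    spectrum = fluorescence + raman
    recovered = spectrum - iterative_sg_baseline(spectrum)  # fluorescence removed
    ```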

  7. Dimension Reduction With Extreme Learning Machine.

    PubMed

    Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou

    2016-08-01

    Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoders (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and of linear AEs are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and non-linear AEs are hampered by slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which, to some extent, represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AEs, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights of additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition, CIFAR-10 object recognition, and NORB object recognition data sets show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
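
    A minimal numpy sketch of the ELM-AE recipe as described above: random orthogonal input weights and biases are left untuned, only the output weights are solved, by least squares, and those output weights then project the data to the low-dimensional space. Sizes and data are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 64))         # stand-in data, 64 features
    n_hidden = 10                           # target dimensionality

    W, _ = np.linalg.qr(rng.normal(size=(64, n_hidden)))   # orthogonal input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                 # random, untuned hidden layer

    beta, *_ = np.linalg.lstsq(H, X, rcond=None)           # output weights (n_hidden x 64)
    X_reduced = X @ beta.T                                 # ELM-AE embedding
    print(X_reduced.shape)                                 # (1000, 10)
    ```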

  8. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)

  9. Decentralized control of sound radiation using iterative loop recovery.

    PubMed

    Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R

    2010-10-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  10. Decentralized Control of Sound Radiation Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2009-01-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  11. DIII-D accomplishments and plans in support of fusion next steps

    DOE PAGES

    Buttery, R. J; Eidietis, N.; Holcomb, C.; ...

    2013-06-01

    DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. We have adapted operating scenarios for ITER to low torque, and these are now being optimized for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high-βN-capable steady-state regimes. Divertor research is identifying the challenges, physics, and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping optimize design choices and prepare the scientific tools for operation in ITER, and resolve key elements of the plasma configuration and divertor solution for an FNSF.

  12. Accurate numerical solution of the Helmholtz equation by iterative Lanczos reduction.

    PubMed

    Ratowsky, R P; Fleck, J A

    1991-06-01

    The Lanczos recursion algorithm is used to determine forward-propagating solutions for both the paraxial and Helmholtz wave equations for longitudinally invariant refractive indices. By eigenvalue analysis it is demonstrated that the method gives extremely accurate solutions to both equations.
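
    The underlying Lanczos recursion reduces a large symmetric operator to a small tridiagonal matrix whose extreme eigenvalues rapidly approximate those of the operator. A plain sketch follows, without the reorthogonalization a production code would need; the test matrix and subspace size are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    M = rng.normal(size=(200, 200))
    A = M + M.T                               # symmetric test operator

    m = 30                                    # Krylov subspace dimension
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.normal(size=200); q /= np.linalg.norm(q)
    q_prev = np.zeros_like(q)

    for j in range(m):
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]

    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    ritz = np.linalg.eigvalsh(T)
    print(ritz[-1], np.linalg.eigvalsh(A)[-1])   # extreme Ritz value ~ lambda_max
    ```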

  13. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51], a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper, Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ: the first depends on the rate of convergence of the nonlinear solver, and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
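
    The continuation-with-backtracking driver can be sketched generically (this is not the Href/NITSOL implementation): each converged solution seeds the solve at a smaller ɛ, and a failed solve triggers a more cautious step. The toy two-point boundary value problem, discretization, and step rules below are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    n = 101
    h = 1.0 / (n + 1)

    def residual(u, eps):
        # eps*u'' + u - u^3 = 0 on (0,1), u(0) = 0, u(1) = 1 (toy problem)
        up = np.concatenate(([0.0], u, [1.0]))
        return eps * (up[:-2] - 2 * up[1:-1] + up[2:]) / h**2 + u - u**3

    eps, eps_target = 1.0, 1e-4
    u = np.linspace(0, 1, n + 2)[1:-1]                  # initial guess
    while eps > eps_target:
        eps_try = max(eps / 10.0, eps_target)           # aggressive reduction
        u_new, info, ier, msg = fsolve(residual, u, args=(eps_try,), full_output=True)
        if ier == 1:
            u, eps = u_new, eps_try                     # accept, continue downward
        else:
            eps_try = np.sqrt(eps * eps_try)            # backtrack: smaller step
            u_new, info, ier, msg = fsolve(residual, u, args=(eps_try,), full_output=True)
            if ier != 1:
                raise RuntimeError("continuation stalled at eps = %g" % eps)
            u, eps = u_new, eps_try
    ```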

  14. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.

  15. [Fluoroscopy dose reduction of computed tomography guided chest interventional radiology using real-time iterative reconstruction].

    PubMed

    Hasegawa, Hiroaki; Mihara, Yoshiyuki; Ino, Kenji; Sato, Jiro

    2014-11-01

    The purpose of this study was to evaluate the radiation dose reduction to patients and radiologists in computed tomography (CT) guided examinations for the thoracic region using CT fluoroscopy. Image quality evaluation of the real-time filtered back-projection (RT-FBP) images and the real-time adaptive iterative dose reduction (RT-AIDR) images was carried out on noise and artifacts that were considered to affect the CT fluoroscopy. The image standard deviation was improved in the fluoroscopy setting with less than 30 mA on 120 kV. With regard to the evaluation of artifact visibility and the amount generated by the needle attached to the chest phantom, there was no significant difference between the RT-FBP images with 120 kV, 20 mA and the RT-AIDR images with low-dose conditions (greater than 80 kV, 30 mA and less than 120 kV, 20 mA). The results suggest that it is possible to reduce the radiation dose by up to approximately 34% using RT-AIDR while maintaining image quality equivalent to the RT-FBP images with 120 kV, 20 mA.

  16. Evidence of dose saving in routine CT practice using iterative reconstruction derived from a national diagnostic reference level survey.

    PubMed

    Thomas, P; Hayton, A; Beveridge, T; Marks, P; Wallace, A

    2015-09-01

    To assess the influence and significance of the use of iterative reconstruction (IR) algorithms on patient dose in CT in Australia. We examined survey data submitted to the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) National Diagnostic Reference Level Service (NDRLS) during 2013 and 2014. We compared median survey dose metrics with categorization by scan region and use of IR. The use of IR results in a reduction in volume CT dose index of between 17% and 44% and a reduction in dose-length product of between 14% and 34% depending on the specific scan region. The reduction was highly significant (p < 0.001, Wilcoxon rank-sum test) for all six scan regions included in the NDRLS. Overall, 69% (806/1167) of surveys included in the analysis used IR. The use of IR in CT is achieving dose savings of 20-30% in routine practice in Australia. IR appears to be widely used by participants in the ARPANSA NDRLS with approximately 70% of surveys submitted employing this technique. This study examines the impact of the use of IR on patient dose in CT on a national scale.

  17. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from poor efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.

  18. Organizational interventions improving access to community-based primary health care for vulnerable populations: a scoping review.

    PubMed

    Khanassov, Vladimir; Pluye, Pierre; Descoteaux, Sarah; Haggerty, Jeannie L; Russell, Grant; Gunn, Jane; Levesque, Jean-Frederic

    2016-10-10

    Access to community-based primary health care (hereafter, 'primary care') is a priority in many countries. Health care systems have emphasized policies that help the community 'get the right service in the right place at the right time'. However, little is known about organizational interventions in primary care that are aimed to improve access for populations in situations of vulnerability (e.g., socioeconomically disadvantaged) and how successful they are. The purpose of this scoping review was to map the existing evidence on organizational interventions that improve access to primary care services for vulnerable populations. Scoping review followed an iterative process. Eligibility criteria: organizational interventions in Organisation for Economic Cooperation and Development (OECD) countries; aiming to improve access to primary care for vulnerable populations; all study designs; published from 2000 in English or French; reporting at least one outcome (avoidable hospitalization, emergency department admission, or unmet health care needs). Main bibliographic databases (Medline, Embase, CINAHL) and team members' personal files. One researcher selected relevant abstracts and full text papers. Theory-driven synthesis: The researcher classified included studies using (i) the 'Patient Centered Access to Healthcare' conceptual framework (dimensions and outcomes of access to primary care), and (ii) the classification of interventions of the Cochrane Effective Practice and Organization of Care. Using pattern analysis, interventions were mapped in accordance with the presence/absence of 'dimension-outcome' patterns. Out of 8,694 records (title/abstract), 39 studies with varying designs were included. The analysis revealed the following pattern. Results of 10 studies on interventions classified as 'Formal integration of services' suggested that these interventions were associated with three dimensions of access (approachability, availability and affordability) and reduction of hospitalizations (four/four studies), emergency department admissions (six/six studies), and unmet healthcare needs (five/six studies). These 10 studies included seven non-randomized studies, one randomized controlled trial, one quantitative descriptive study, and one mixed methods study. Our results suggest the limited breadth of research in this area, and that it will be feasible to conduct a full systematic review of studies on the effectiveness of the formal integration of services to improve access to primary care services for vulnerable populations.

  19. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  20. The use of adaptive statistical iterative reconstruction in pediatric head CT: a feasibility study.

    PubMed

    Vorona, G A; Zuccoli, G; Sutcavage, T; Clayton, B L; Ceschin, R C; Panigrahy, A

    2013-01-01

    Iterative reconstruction techniques facilitate CT dose reduction, though to our knowledge no group has explored using iterative reconstruction with pediatric head CT. Our purpose was to perform a feasibility study to assess the use of ASIR in a small group of pediatric patients undergoing head CT. An Alderson-Rando head phantom was scanned at decreasing 10% mA intervals relative to our standard protocol, and each study was then reconstructed at 10% ASIR intervals. An intracranial region of interest was consistently placed to estimate noise. Our ventriculoperitoneal shunt CT protocol was subsequently modified, and patients were scanned at 20% ASIR with approximately 20% mA reductions. ASIR studies were anonymously compared with older non-ASIR studies from the same patients by 2 attending pediatric neuroradiologists for diagnostic utility, sharpness, noise, and artifacts. The phantom study demonstrated similar noise at 100% mA/0% ASIR (3.9) and 80% mA/20% ASIR (3.7). Twelve pediatric patients were scanned at reduced dose at 20% ASIR. The average CTDI(vol) and DLP values of the 20% ASIR studies were 22.4 mGy and 338.4 mGy-cm, and for the non-ASIR studies, they were 28.8 mGy and 444.5 mGy-cm, representing statistically significant decreases in the CTDI(vol) (22.1%, P = .00007) and DLP (23.9%, P = .0005) values. There were no significant differences between the ASIR studies and non-ASIR studies with respect to diagnostic acceptability, sharpness, noise, or artifacts. Our findings suggest that 20% ASIR can provide approximately 22% dose reduction in pediatric head CT without affecting image quality.

  1. Reduction of metal artifacts due to dental hardware in computed tomography angiography: assessment of the utility of model-based iterative reconstruction.

    PubMed

    Kuya, Keita; Shinohara, Yuki; Kato, Ayumi; Sakamoto, Makoto; Kurosaki, Masamichi; Ogawa, Toshihide

    2017-03-01

    The aim of this study is to assess the value of adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) for reduction of metal artifacts due to dental hardware in carotid CT angiography (CTA). Thirty-seven patients with dental hardware who underwent carotid CTA were included. CTA was performed with a GE Discovery CT750 HD scanner and reconstructed with filtered back projection (FBP), ASIR, and MBIR. We measured the standard deviation at the cervical segment of the internal carotid artery that was most affected by dental metal artifacts (SD1) and the standard deviation at the common carotid artery that was not affected by the artifact (SD2). We calculated the artifact index (AI) as AI = [(SD1)^2 - (SD2)^2]^(1/2) and compared the AI of FBP, ASIR, and MBIR. Visual assessment of the internal carotid artery was also performed by two neuroradiologists using a five-point scale for each axial and reconstructed sagittal image. The inter-observer agreement was analyzed using weighted kappa analysis. MBIR significantly improved AI compared with FBP and ASIR (p < 0.001, each). We found no significant difference in AI between FBP and ASIR (p = 0.502). The visual score of MBIR was significantly better than those of FBP and ASIR (p < 0.001, each), whereas the scores of ASIR were the same as those of FBP. Kappa values indicated good inter-observer agreements in all reconstructed images (0.747-0.778). MBIR resulted in a significant reduction in artifact from dental hardware in carotid CTA.
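
    The artifact index above is a direct computation once the two ROI standard deviations are measured; a one-line check with stand-in values:

    ```python
    # SD1 (artifact-affected ROI) and SD2 (reference ROI) are assumed HU values
    sd_artifact, sd_reference = 38.0, 12.0
    ai = (sd_artifact**2 - sd_reference**2) ** 0.5
    print(ai)
    ```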

  2. A second chance: meanings of body weight, diet, and physical activity to women who have experienced cancer.

    PubMed

    Maley, Mary; Warren, Barbour S; Devine, Carol M

    2013-01-01

    To understand the meanings of diet, physical activity, and body weight in the context of women's cancer experiences. Grounded theory using 15 qualitative interviews and 3 focus groups. Grassroots community cancer organizations in the northeastern United States. Thirty-six white women cancer survivors; 86% had experienced breast cancer. Participants' views of the meanings of body weight, diet, and physical activity in the context of the cancer. Procedures adapted from the constant comparative method of qualitative analysis using iterative open coding. Themes emerged along 3 intersecting dimensions: vulnerability and control, stress and living well, and uncertainty and confidence. Diet and body weight were seen as sources of increased vulnerability and distress. Uncertainty about diet heightened distress and lack of control. Physical activity was seen as a way to regain control and reduce distress. Emergent themes of vulnerability-control, stress-living well, and uncertainty-confidence may aid in understanding and promoting health behaviors in the growing population of cancer survivors. Messages that resonated with participants included taking ownership over one's body, physical activity as stress reduction, healthy eating for overall health and quality of life, and a second chance to get it right. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  3. Fresnel transform phase retrieval from magnitude.

    PubMed

    Pitts, Todd A; Greenleaf, James F

    2003-08-01

    This report presents a generalized projection method for recovering the phase of a finite support, two-dimensional signal from knowledge of its magnitude in the spatial position and Fresnel transform domains. We establish the uniqueness of sampled monochromatic scalar field phase given Fresnel transform magnitude and finite region of support constraints for complex signals. We derive an optimally relaxed version of the algorithm resulting in a significant reduction in the number of iterations needed to obtain useful results. An advantage of using the Fresnel transform (as opposed to Fourier) for measurement is that the shift-invariance of the transform operator implies retention of object location information in the transformed image magnitude. As a practical application in the context of ultrasound beam measurement we discuss the determination of small optical phase shifts from near field optical intensity distributions. Experimental data are used to reconstruct the phase shape of an optical field immediately after propagating through a wide bandwidth ultrasonic pulse. The phase of each point on the optical wavefront is proportional to the ray sum of pressure through the ultrasound pulse (assuming low ultrasonic intensity). An entire pressure field was reconstructed in three dimensions and compared with a calibrated hydrophone measurement. The comparison is excellent, demonstrating that the phase retrieval is quantitative.
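
    The generalized projection loop alternates between the two measurement planes, imposing the known magnitude in each while keeping the current phase estimate. The sketch below uses an angular-spectrum Fresnel propagator and synthetic magnitudes; the grid, wavelength, distance, and the absence of a support constraint or the paper's optimal relaxation are all simplifying assumptions.

    ```python
    import numpy as np

    n, dx, wavelength, z = 256, 10e-6, 633e-9, 0.05     # assumed geometry
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel transfer fn

    def propagate(u, forward=True):
        h = H if forward else np.conj(H)
        return np.fft.ifft2(np.fft.fft2(u) * h)

    # measured magnitudes (stand-ins; in practice from intensity measurements)
    mag_near = np.ones((n, n))
    mag_far = np.abs(propagate(mag_near * np.exp(1j * 0.3 * np.random.rand(n, n))))

    u = mag_near * np.exp(1j * np.zeros((n, n)))        # initial phase guess
    for _ in range(200):
        v = propagate(u, forward=True)
        v = mag_far * np.exp(1j * np.angle(v))          # impose far-plane magnitude
        u = propagate(v, forward=False)
        u = mag_near * np.exp(1j * np.angle(u))         # impose near-plane magnitude
    ```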

  4. Upgrade of Langmuir probe diagnostic in ITER-like tungsten mono-block divertor on experimental advanced superconducting tokamak.

    PubMed

    Xu, J C; Wang, L; Xu, G S; Luo, G N; Yao, D M; Li, Q; Cao, L; Chen, L; Zhang, W; Liu, S C; Wang, H Q; Jia, M N; Feng, W; Deng, G Z; Hu, L Q; Wan, B N; Li, J; Sun, Y W; Guo, H Y

    2016-08-01

    In order to withstand the rapid increase in particle and power loads onto the divertor and to demonstrate the feasibility of the ITER design under long-pulse operation, the upper divertor of the EAST tokamak has been upgraded to an actively water-cooled, ITER-like tungsten mono-block structure since the 2014 campaign, the first attempt of its kind on a tokamak. Accordingly, a new divertor Langmuir probe diagnostic system (DivLP) was designed and installed on the tungsten divertor to obtain plasma parameters in the divertor region such as electron temperature, electron density, and particle and heat fluxes. More specifically, two identical triple-probe arrays have been installed at two ports at different toroidal positions (separated by 112.5 deg toroidally), which provide fundamental data for studying the toroidal asymmetry of divertor power deposition and related three-dimensional (3D) physics, as induced by resonant magnetic perturbations, lower hybrid waves, and so on. The shape of the graphite tips and the mounting structure of the probes are designed to match the structure of the upper tungsten divertor. The ceramic support, small graphite tips, and suitable connectors allow installation in the very narrow (13.5 mm) gap between the cassette body and the tungsten mono-blocks. The 2014 and 2015 commissioning campaigns demonstrated that the newly upgraded diagnostic is successful. Representative DivLP measurements are presented and discussed, proving its availability and reliability.

  5. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed Central

    Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-01-01

    Objectives To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results Document analysis found all programs’ ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences. PMID:28315858

  6. Upgrade of Langmuir probe diagnostic in ITER-like tungsten mono-block divertor on experimental advanced superconducting tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J. C.; Jia, M. N.; Feng, W.

    2016-08-15

    In order to withstand the rapid increase in particle and power loads onto the divertor and to demonstrate the feasibility of the ITER design under long-pulse operation, the upper divertor of the EAST tokamak has been upgraded to an actively water-cooled, ITER-like tungsten mono-block structure since the 2014 campaign, the first attempt of its kind on a tokamak. Accordingly, a new divertor Langmuir probe diagnostic system (DivLP) was designed and installed on the tungsten divertor to obtain plasma parameters in the divertor region such as electron temperature, electron density, and particle and heat fluxes. More specifically, two identical triple-probe arrays have been installed at two ports at different toroidal positions (separated by 112.5 deg toroidally), which provide fundamental data for studying the toroidal asymmetry of divertor power deposition and related three-dimensional (3D) physics, as induced by resonant magnetic perturbations, lower hybrid waves, and so on. The shape of the graphite tips and the mounting structure of the probes are designed to match the structure of the upper tungsten divertor. The ceramic support, small graphite tips, and suitable connectors allow installation in the very narrow (13.5 mm) gap between the cassette body and the tungsten mono-blocks. The 2014 and 2015 commissioning campaigns demonstrated that the newly upgraded diagnostic is successful. Representative DivLP measurements are presented and discussed, proving its availability and reliability.

  7. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed

    Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-03-17

    To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.

  8. Three-dimension reconstruction based on spatial light modulator

    NASA Astrophysics Data System (ADS)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology, a three-dimensional digital point cloud can be obtained from a two-dimensional image and used to simulate the three-dimensional structure of a physical object for further study. At present, three-dimensional point cloud data are mainly acquired with adaptive optics systems using a Shack-Hartmann sensor or with phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, the surface normal vector of each pixel is first calculated in the light-source coordinate system; these vectors are then converted to image coordinates, yielding the desired 3D point cloud. Second, after de-noising and repair, feature points are selected and fitted with Zernike polynomials to obtain a fitting function of the surface topography, thereby reconstructing the object's three-dimensional topography. In this paper, a new three-dimensional reconstruction algorithm is proposed with which the topography can be estimated from grayscale values at different sample points. Simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
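
    The Zernike fitting step can be sketched as an ordinary least-squares problem on the unit disk: evaluate a few low-order Zernike terms at the valid pixels and solve for their coefficients. The term set and synthetic wavefront below are illustrative assumptions.

    ```python
    import numpy as np

    n = 128
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1.0                          # unit disk

    # low-order Zernike terms: piston, tilts, defocus, astigmatisms
    basis = np.stack([np.ones_like(r), r*np.cos(t), r*np.sin(t),
                      2*r**2 - 1, r**2*np.cos(2*t), r**2*np.sin(2*t)])

    true_coeffs = np.array([0.1, -0.3, 0.2, 0.5, -0.1, 0.05])
    wavefront = np.tensordot(true_coeffs, basis, axes=1)
    wavefront += 0.01 * np.random.default_rng(5).normal(size=wavefront.shape)

    A = basis[:, mask].T                     # (n_pixels, n_terms) design matrix
    coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
    print(np.round(coeffs, 3))               # ~ true_coeffs
    ```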

  9. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    NASA Technical Reports Server (NTRS)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  10. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edges of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out a potential advantage of iterative reconstruction that might be evident in a study that uses a larger number of nodules or repeated scans.

  11. Ghost suppression in image restoration filtering

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1975-01-01

    An optimum image restoration filter is described in which provision is made to constrain the spatial extent of the restoration function, the noise level of the filter output, and the rate of falloff of the composite system point-spread function away from the origin. Experimental results show that sidelobes on the composite system point-spread function produce ghosts in the restored image near discontinuities in intensity level. By redetermining the filter using a penalty function that is zero over the main lobe of the composite point-spread function of the optimum filter and nonzero where the point-spread function departs from a smoothly decaying function in the sidelobe region, a great reduction in sidelobe level is obtained. Almost no loss in resolving power of the composite system results from this procedure. By iteratively carrying out the same procedure, even further reductions in sidelobe level are obtained. Examples of original and iterated restoration functions are shown along with their effects on a test image.
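
    A minimal 1-D sketch of the penalty-function idea: a least-squares restoration filter is redetermined iteratively, with a penalty that is zero over the main lobe of the composite point-spread function and nonzero where sidelobes appear. The blur, penalty weight, and thresholds here are invented for illustration and are not the paper's actual filter.

        import numpy as np

        # Gaussian system point-spread function sampled on a short grid.
        n = 41
        t = np.arange(n) - n // 2
        p = np.exp(-0.5 * (t / 2.0) ** 2)
        p /= p.sum()

        # Convolution matrix: composite system PSF = P @ h (length 2n - 1).
        P = np.zeros((2 * n - 1, n))
        for j in range(n):
            P[j:j + n, j] = p

        delta = np.zeros(2 * n - 1)
        delta[n - 1] = 1.0                          # ideal composite response
        mainlobe = np.abs(np.arange(2 * n - 1) - (n - 1)) <= 4
        lam, W = 10.0, np.zeros(2 * n - 1)          # no penalty on first pass

        for it in range(4):                         # iterative redetermination
            # Least-squares restoration filter with a weighted sidelobe penalty.
            Areg = P.T @ P + lam * P.T @ np.diag(W) @ P + 1e-6 * np.eye(n)
            h = np.linalg.solve(Areg, P.T @ delta)
            comp = P @ h                            # composite point-spread function
            # Penalize only where the response departs from zero outside the main lobe.
            W = ((~mainlobe) & (np.abs(comp) > 1e-3)).astype(float)
            print(f"pass {it}: peak sidelobe = {np.abs(comp[~mainlobe]).max():.4f}")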

  12. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.

  13. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
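
    For reference, a minimal sketch of the Gauss-Seidel baseline mentioned above, applied to a small ergodic chain (the transition matrix is invented); the multi-level method accelerates exactly this kind of fixed-point sweep by recursive coarsening.

        import numpy as np

        # Transition matrix of a small ergodic discrete-time Markov chain (rows sum to 1).
        P = np.array([[0.5, 0.3, 0.2],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6]])

        # The steady state pi solves pi = pi P, i.e. (I - P)^T pi = 0 with sum(pi) = 1.
        A = (np.eye(len(P)) - P).T
        pi = np.full(len(P), 1.0 / len(P))         # uniform initial guess

        for sweep in range(200):                   # Gauss-Seidel sweeps
            for i in range(len(P)):
                pi[i] = -(A[i] @ pi - A[i, i] * pi[i]) / A[i, i]
            pi /= pi.sum()                         # renormalize the singular system

        print("steady state:", pi, " residual:", np.abs(pi @ P - pi).max())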

  14. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with an x-ray source of 9,000 focal spot positions and a small photon-counting detector. Ninety fluoroscopic projections, or "superviews," spanning 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated based on flat-field data acquired without a phantom. Fifteen subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
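
    A toy sketch of the ML-TR idea on invented data: ordered-subsets ascent on the Poisson transmission log-likelihood, with a surrogate-scaled step rather than a fixed step size. The geometry, counts, and subset count are placeholders, not the bench-top system's.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy transmission setup: n_pix attenuation values, n_ray line integrals.
        n_pix, n_ray = 16, 64
        A = rng.uniform(0.0, 1.0, (n_ray, n_pix))     # system (path-length) matrix
        mu_true = rng.uniform(0.01, 0.05, n_pix)      # true attenuation image
        b = 1.0e4                                     # blank-scan counts per ray
        y = rng.poisson(b * np.exp(-A @ mu_true))     # Poisson transmission counts

        subsets = np.array_split(np.arange(n_ray), 4) # ordered subsets
        L = A.sum(axis=1)                             # total path length per ray
        mu = np.full(n_pix, 0.02)                     # initial estimate
        err0 = np.abs(mu - mu_true).max()

        for it in range(10):                          # 10 full iterations
            for s in subsets:
                yhat = b * np.exp(-A[s] @ mu)
                # Ascent on the log-likelihood: gradient A^T(yhat - y),
                # divided by a majorizing curvature term for a safe step.
                num = A[s].T @ (yhat - y[s])
                den = A[s].T @ (L[s] * yhat)
                mu = np.maximum(mu + num / den, 0.0)  # attenuation stays non-negative

        print(f"max abs error: {err0:.4f} -> {np.abs(mu - mu_true).max():.4f}")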

  15. Reduction of Large Dynamical Systems by Minimization of Evolution Rate

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1999-01-01

    Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.

  16. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
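
    The compression half of an ICD sweep boils down to contracting a shared bond and re-splitting it with a truncated singular value decomposition, which discards redundant bond dimensions automatically. A minimal matrix-level sketch (the tensor shapes and cutoff are invented):

        import numpy as np

        def compress_bond(T1, T2, cutoff=1e-10):
            """One contraction-decomposition step on a single bond.

            T1 has shape (a, D), T2 has shape (D, b); the shared index of
            size D is the lattice bond.  Contracting and re-splitting with
            a truncated SVD removes redundant bond dimensions without
            changing the network's contraction value (up to the discarded
            singular values).
            """
            theta = T1 @ T2                         # contract the bond
            U, S, Vh = np.linalg.svd(theta, full_matrices=False)
            keep = S > cutoff * S[0]                # drop negligible singular values
            U, S, Vh = U[:, keep], S[keep], Vh[keep]
            return U * S, Vh                        # new pair with a smaller bond

        # Example: a rank-deficient bond of size 8 is compressed automatically.
        rng = np.random.default_rng(1)
        M = rng.normal(size=(6, 3)) @ rng.normal(size=(3, 8))   # true rank 3
        T1, T2 = M, np.eye(8)                                   # bond dimension 8
        T1c, T2c = compress_bond(T1, T2)
        print("bond size: 8 ->", T1c.shape[1])                  # 3 after compression
        print("max error:", np.abs(T1c @ T2c - T1 @ T2).max())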

  17. Dimension reduction of frequency-based direct Granger causality measures on short time series.

    PubMed

    Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2017-09-01

    The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), typically the fit of a vector autoregressive (VAR) model on the multivariate time series is required. For short time series of many variables, the estimation of VAR may not be stable, requiring dimension reduction resulting in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy to reliably estimate brain networks within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.
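
    A minimal sketch of the unrestricted pipeline that RGPDC builds on: fit a VAR by least squares and evaluate partial directed coherence from the coefficient matrices. This is a simplified PDC without the noise-covariance weighting of GPDC, and the three-variable system is invented.

        import numpy as np

        rng = np.random.default_rng(2)

        # Simulate a 3-variable VAR(1): x1 drives x2, x2 drives x3 (short series).
        A1 = np.array([[0.5, 0.0, 0.0],
                       [0.4, 0.5, 0.0],
                       [0.0, 0.4, 0.5]])
        T, K = 200, 3
        X = np.zeros((T, K))
        for t in range(1, T):
            X[t] = A1 @ X[t - 1] + rng.normal(scale=0.5, size=K)

        # Least-squares fit of the VAR(1) coefficients: X_t ~ A X_{t-1}.
        A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

        # Partial directed coherence at frequency f (cycles/sample, 0 <= f < 0.5):
        # Abar(f) = I - sum_k A_k exp(-i 2 pi f k); PDC_ij = |Abar_ij| / ||Abar_:j||.
        def pdc(A_list, f):
            Abar = np.eye(K) - sum(Ak * np.exp(-2j * np.pi * f * (k + 1))
                                   for k, Ak in enumerate(A_list))
            return np.abs(Abar) / np.linalg.norm(Abar, axis=0)

        P = pdc([A_hat], f=0.1)
        print(np.round(P, 2))   # entry (i, j): direct influence of channel j on i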

  18. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection

    NASA Astrophysics Data System (ADS)

    Aytaç Korkmaz, Sevcan; Binol, Hamidullah

    2018-03-01

    Stomach cancer still causes many deaths, and early diagnosis is crucial for reducing its mortality rate. Therefore, computer-aided methods for early detection were developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images were calculated. At the same time, Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods were used for dimensionality reduction of the features, mapping the high-dimensional feature vectors to lower dimensions. Artificial neural network (ANN) and Random Forest (RF) classifiers were then used to classify the stomach cancer images with these new, lower-dimensional feature sets. New medical systems were developed to measure the effect of feature dimensionality by obtaining features at different dimensionalities with the reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN combinations.
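
    A minimal sketch of one such pipeline, LBP features followed by MDS reduction and an ANN classifier, using scikit-image and scikit-learn on stand-in random patches; real pathology images and tuned parameters would replace these, so near-chance accuracy is expected here.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.manifold import MDS
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)

        def lbp_histogram(img, P=8, R=1.0):
            """Uniform LBP histogram of a grayscale image (one feature vector)."""
            codes = local_binary_pattern(img, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
            return hist

        # Stand-in data: random 64x64 'pathology' patches with binary labels.
        images = rng.random((60, 64, 64))
        labels = rng.integers(0, 2, 60)

        features = np.array([lbp_histogram(im) for im in images])

        # MDS reduces the LBP feature dimensionality before classification
        # (the LBP_MDS_ANN combination reported as among the best above).
        low_dim = MDS(n_components=3, random_state=0).fit_transform(features)

        Xtr, Xte, ytr, yte = train_test_split(low_dim, labels, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(Xtr, ytr)
        print("test accuracy:", clf.score(Xte, yte))  # ~chance on random data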

  19. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    PubMed

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) were calculated from attenuation values measured in caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher in MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. • Model-Based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-Based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  20. Hybrid numerical method for solution of the radiative transfer equation in one, two, or three dimensions.

    PubMed

    Reinersman, Phillip N; Carder, Kendall L

    2004-05-01

    A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.

  1. The Responsive Environmental Assessment for Classroom Teaching (REACT): the dimensionality of student perceptions of the instructional environment.

    PubMed

    Nelson, Peter M; Demers, Joseph A; Christ, Theodore J

    2014-06-01

    This study details the initial development of the Responsive Environmental Assessment for Classroom Teachers (REACT). REACT was developed as a questionnaire to evaluate student perceptions of the classroom teaching environment. Researchers engaged in an iterative process to develop, field test, and analyze student responses on 100 rating-scale items. Participants included 1,465 middle school students across 48 classrooms in the Midwest. Item analysis, including exploratory and confirmatory factor analysis, was used to refine a 27-item scale with a second-order factor structure. Results support the interpretation of a single general dimension of the Classroom Teaching Environment with 6 subscale dimensions: Positive Reinforcement, Instructional Presentation, Goal Setting, Differentiated Instruction, Formative Feedback, and Instructional Enjoyment. Applications of REACT in research and practice are discussed along with implications for future research and the development of classroom environment measures. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  2. Perspective: Optical measurement of feature dimensions and shapes by scatterometry

    NASA Astrophysics Data System (ADS)

    Diebold, Alain C.; Antonelli, Andy; Keller, Nick

    2018-05-01

    The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations are done using the rigorous coupled wave analysis for solving Maxwell's equations. In this article, we describe Mueller matrix spectroscopic ellipsometry-based scatterometry. Next, the rigorous coupled wave analysis for Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for the sacrificial (dummy) amorphous silicon etch back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.

  3. Multigrid methods for isogeometric discretization

    PubMed Central

    Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.

    2013-01-01

    We present (geometric) multigrid methods for isogeometric discretization of scalar second order elliptic problems. The smoothing property of the relaxation method, and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, convergence factor and iterations count for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, numerical results include the problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p=4, and for C^0 and C^(p-1) smoothness. PMID:24511168
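
    The two-grid cycle underlying such multigrid methods can be sketched compactly for the 1-D Poisson problem. The smoother, damping factor, and transfer operators below are generic textbook choices, not the isogeometric ones analyzed in the paper; the error drops per cycle until it stalls at the discretization level.

        import numpy as np

        def relax(u, f, h, sweeps=3):
            """Damped-Jacobi smoothing for -u'' = f with zero boundary values."""
            for _ in range(sweeps):
                u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
            return u

        def two_grid_cycle(u, f, h):
            """Pre-smooth, solve the residual equation on a 2h grid, correct, post-smooth."""
            u = relax(u, f, h)
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
            rc = r[::2]                                    # restriction by injection
            hc, m = 2 * h, len(rc) - 2
            Ac = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc * hc)
            ec = np.zeros_like(rc)
            ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])       # exact coarse-grid solve
            e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
            return relax(u + e, f, h)

        n = 129
        x, h = np.linspace(0.0, 1.0, n), 1.0 / (n - 1)
        f = np.pi ** 2 * np.sin(np.pi * x)                 # exact solution: sin(pi x)
        u = np.zeros(n)
        for cycle in range(8):
            u = two_grid_cycle(u, f, h)
            print(f"cycle {cycle + 1}: max error = "
                  f"{np.abs(u - np.sin(np.pi * x)).max():.2e}")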

  4. Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game

    NASA Astrophysics Data System (ADS)

    Slyamov, Azat

    2017-12-01

    We approximate a three dimensional version of deterministic Sierpinski gasket (SG), also known as Sierpinski tetrahedron (ST), by using the chaos game representation (CGR). Structural properties of the fractal, generated by both deterministic and CGR algorithms are determined using small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of ST, using an optimized Debye formula. We show that scattering from CGR of ST recovers basic fractal properties, such as fractal dimension, iteration number, scaling factor, overall size of the system and the number of units composing the fractal.
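
    A minimal sketch of the CGR construction and a box-counting check of the recovered fractal dimension (exactly log 4 / log 2 = 2 for the ST); the point count and box sizes are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(4)

        # Vertices of a regular tetrahedron: the four chaos-game attractors.
        V = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -1.0],
                      [-1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]])

        # Chaos game: jump halfway toward a randomly chosen vertex each step
        # (scaling factor 1/2 reproduces the deterministic ST in the limit).
        n_points, skip = 100_000, 100
        p, pts = np.zeros(3), []
        for i in range(n_points + skip):
            p = p + 0.5 * (V[rng.integers(4)] - p)
            if i >= skip:                 # discard the transient
                pts.append(p)
        pts = np.array(pts)

        # Box-counting check of the fractal dimension (exact: log 4 / log 2 = 2).
        for eps in (0.5, 0.25, 0.125):
            boxes = len(np.unique(np.floor((pts + 1.0) / eps), axis=0))
            print(f"eps={eps}: N={boxes}, "
                  f"estimate={np.log(boxes) / np.log(2.0 / eps):.2f}")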

  5. Framework for three-dimensional coherent diffraction imaging by focused beam x-ray Bragg ptychography.

    PubMed

    Hruszkewycz, Stephan O; Holt, Martin V; Tripathi, Ash; Maser, Jörg; Fuoss, Paul H

    2011-06-15

    We present the framework for convergent beam Bragg ptychography, and, using simulations, we demonstrate that nanocrystals can be ptychographically reconstructed from highly convergent x-ray Bragg diffraction. The ptychographic iterative engine is extended to three dimensions and shown to successfully reconstruct a simulated nanocrystal using overlapping raster scans with a defocused curved beam, the diameter of which matches the crystal size. This object reconstruction strategy can serve as the basis for coherent diffraction imaging experiments at coherent scanning nanoprobe x-ray sources.

  6. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  7. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    PubMed

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.

  8. Minimizing Cache Misses Using Minimum-Surface Bodies

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)

    2002-01-01

    A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. First, we derive lower bounds on the cache misses which any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction of cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.

  9. Chaotic behavior of renal sympathetic nerve activity: effect of baroreceptor denervation and cardiac failure.

    PubMed

    DiBona, G F; Jones, S Y; Sawin, L L

    2000-09-01

    Nonlinear dynamic analysis was used to examine the chaotic behavior of renal sympathetic nerve activity in conscious rats subjected to either complete baroreceptor denervation (sinoaortic and cardiac baroreceptor denervation) or induction of congestive heart failure (CHF). The peak interval sequence of synchronized renal sympathetic nerve discharge was extracted and used for analysis. In control rats, this yielded a system whose correlation dimension converged to a low value over the embedding dimension range of 10-15 and whose greatest Lyapunov exponent was positive. Complete baroreceptor denervation was associated with a decrease in the correlation dimension of the system (before 2.65 +/- 0.27, after 1.64 +/- 0.17; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.201 +/- 0.008 bits/data point before, 0.177 +/- 0.004 bits/data point after, P < 0.02). CHF, a state characterized by impaired sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, was associated with a similar decrease in the correlation dimension (control 3.41 +/- 0.23, CHF 2.62 +/- 0.26; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.205 +/- 0.048 bits/data point control, 0.136 +/- 0.033 bits/data point CHF, P < 0.02). These results indicate that removal of sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, occurring either physiologically or pathophysiologically, is associated with a decrease in the correlation dimensions of the system and a reduction in chaotic behavior.
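
    A minimal sketch of the Grassberger-Procaccia correlation-dimension estimate used in such analyses, sanity-checked on uniform noise, for which the estimate should approach the embedding dimension; the radii and series length are arbitrary.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(5)

        def correlation_dimension(series, m, tau=1, radii=(0.05, 0.1, 0.2)):
            """Grassberger-Procaccia estimate from a scalar series.

            Delay-embeds the series in m dimensions, computes the correlation
            sum C(r) = fraction of point pairs closer than r, and returns the
            slope of log C(r) versus log r, which estimates the correlation
            dimension.
            """
            n = len(series) - (m - 1) * tau
            emb = np.column_stack([series[i * tau: i * tau + n] for i in range(m)])
            d = pdist(emb)
            logC = np.log([np.mean(d < r) for r in radii])
            return np.polyfit(np.log(radii), logC, 1)[0]

        # Sanity check on noise: points filling m-space give a dimension near m.
        x = rng.random(2000)
        for m in (1, 2, 3):
            print(f"embedding dim {m}: D2 ~ {correlation_dimension(x, m):.2f}")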

  10. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  11. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
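
    A toy sketch of the greedy selection loop with surrogate (1), the objective value after a few fluence map optimization iterations; the "dose" matrices are random stand-ins and the projected-gradient FMO is deliberately minimal.

        import numpy as np

        rng = np.random.default_rng(6)

        # Toy planning problem: each candidate beam b contributes dose A[b] @ w_b,
        # and the FMO seeks nonnegative fluences w minimizing ||A_B w - d||^2.
        n_vox, n_beam = 40, 12
        A = [rng.random((n_vox, 5)) for _ in range(n_beam)]   # candidate beams
        d = rng.random(n_vox) * 10.0                          # desired dose

        def fmo_value(beams, iters):
            """Projected-gradient FMO; a small 'iters' acts as a cheap surrogate."""
            M = np.hstack([A[b] for b in beams])
            w = np.zeros(M.shape[1])
            step = 0.9 / np.linalg.norm(M, 2) ** 2
            for _ in range(iters):
                w = np.maximum(w - step * (M.T @ (M @ w - d)), 0.0)  # keep w >= 0
            return np.sum((M @ w - d) ** 2)

        ensemble = []
        for _ in range(4):                      # greedily grow a 4-beam ensemble
            candidates = [b for b in range(n_beam) if b not in ensemble]
            # Surrogate (1) above: objective after only five FMO iterations.
            ensemble.append(min(candidates,
                                key=lambda b: fmo_value(ensemble + [b], 5)))

        print("selected beams:", ensemble)
        print("converged objective:", round(fmo_value(ensemble, 500), 2))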

  12. Flow balancing orifice for ITER toroidal field coil

    NASA Astrophysics Data System (ADS)

    Litvinovich, A. V.; Y Rodin, I.; Kovalchuk, O. A.; Safonov, A. V.; Stepanov, D. B.; Guryeva, T. M.

    2017-12-01

    Flow balancing orifices (FBOs) are used in the International Thermonuclear Experimental Reactor (ITER) Toroidal Field coil to make the cooling-gas flow rate uniform across the side double pancakes, which have different conductor lengths: 99 m and 305 m, respectively. FBOs consist of straight parts and elbows produced from 316L stainless steel tube, 21.34 x 2.11 mm, and orifices made from 316L stainless steel rod. Each right and left FBO contains 6 orifices; straight FBOs contain 4 or 6 orifices. Before manufacturing the qualification samples, the D.V. Efremov Institute of Electrophysical Apparatus (JSC NIIEFA) proposed to ITER a new approach providing a seamless connection between a tube and a plate; therefore, the most critical weld, between the 1 mm thick orifice and the tube, was removed from the final FBO design. The proposed orifice diameter is three times smaller than the minimum requirement of ISO 5167, so the task was to define the accuracy of the calculated flow characteristics at room temperature and compare them with experimental data. In 2015 the qualification samples of flow balancing orifices were produced and tested. The experimental data showed that the deviation of the calculated values is less than 7%. Based on this result and other tests, ITER approved the design of the FBOs, which made it possible to start serial production. In 2016 JSC NIIEFA delivered 50 FBOs to ITER, i.e., 24 left-side, 24 right-side, and 2 straight FBOs. In order to verify the quality of the FBOs, a test facility was prepared at JSC NIIEFA. A helium tightness test at 10^-9 m3·Pa/s and pressures up to 3 MPa, flow rate measurements at various pressure drops, and non-destructive tests of the orifices and weld seams (ISO 5817, class B) were conducted. Other tests, such as dimensional checks and thermal cycling (300 - 80 - 300 K), were also carried out for each FBO.

  13. The Impact of Symptom Dimensions on Outcome for Exposure and Ritual Prevention Therapy in Obsessive-Compulsive Disorder

    PubMed Central

    Williams, Monnica T.; Farris, Samantha G.; Turkheimer, Eric N.; Franklin, Martin E.; Simpson, H. Blair; Liebowitz, Michael; Foa, Edna B.

    2014-01-01

    Objective Obsessive-compulsive disorder (OCD) is a severe condition with varied symptom presentations. The behavioral treatment with the most empirical support is exposure and ritual prevention (EX/RP). This study examined the impact of symptom dimensions on EX/RP outcomes in OCD patients. Method The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) was used to determine primary symptoms for each participant. An exploratory factor analysis (EFA) of 238 patients identified five dimensions: contamination/cleaning, doubts about harm/checking, hoarding, symmetry/ordering, and unacceptable/taboo thoughts (including religious/moral and somatic obsessions among others). A linear regression was conducted on those who had received EX/RP (n = 87) to examine whether scores on the five symptom dimensions predicted post-treatment Y-BOCS scores, accounting for pre-treatment Y-BOCS scores. Results The average reduction in Y-BOCS score was 43.0%; however, the regression indicated that the unacceptable/taboo thoughts (β = .27, p = .02) and hoarding dimensions (β = .23, p = .04) were associated with significantly poorer EX/RP treatment outcomes. Specifically, patients endorsing religious/moral obsessions, somatic concerns, and hoarding obsessions showed significantly smaller reductions in Y-BOCS severity scores. Conclusions EX/RP was effective for all symptom dimensions; however, it was less effective for unacceptable/taboo thoughts and hoarding than for other dimensions. Clinical implications and directions for research are discussed. PMID:24983796

  14. The impact of symptom dimensions on outcome for exposure and ritual prevention therapy in obsessive-compulsive disorder.

    PubMed

    Williams, Monnica T; Farris, Samantha G; Turkheimer, Eric N; Franklin, Martin E; Simpson, H Blair; Liebowitz, Michael; Foa, Edna B

    2014-08-01

    Obsessive-compulsive disorder (OCD) is a severe condition with varied symptom presentations. The behavioral treatment with the most empirical support is exposure and ritual prevention (EX/RP). This study examined the impact of symptom dimensions on EX/RP outcomes in OCD patients. The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) was used to determine primary symptoms for each participant. An exploratory factor analysis (EFA) of 238 patients identified five dimensions: contamination/cleaning, doubts about harm/checking, hoarding, symmetry/ordering, and unacceptable/taboo thoughts (including religious/moral and somatic obsessions among others). A linear regression was conducted on those who had received EX/RP (n=87) to examine whether scores on the five symptom dimensions predicted post-treatment Y-BOCS scores, accounting for pre-treatment Y-BOCS scores. The average reduction in Y-BOCS score was 43.0%; however, the regression indicated that the unacceptable/taboo thoughts (β=.27, p=.02) and hoarding dimensions (β=.23, p=.04) were associated with significantly poorer EX/RP treatment outcomes. Specifically, patients endorsing religious/moral obsessions, somatic concerns, and hoarding obsessions showed significantly smaller reductions in Y-BOCS severity scores. EX/RP was effective for all symptom dimensions; however, it was less effective for unacceptable/taboo thoughts and hoarding than for other dimensions. Clinical implications and directions for research are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. The SCUBA-2 Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Thomas, Holly S.; Currie, Malcolm J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing, displaying, and calibrating SCUBA-2 data. It describes some of the data artefacts present in SCUBA-2 time-series and methods to mitigate them. In particular, this cookbook illustrates the various steps required to reduce the data; and gives an overview of the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command controlled by a configuration file. Specialised configuration files are presented.

  16. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    NASA Technical Reports Server (NTRS)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

    This paper evaluates the potential of the embedded Graphics Processing Unit in the NVIDIA Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51% of the performance of the high-end 8-core Intel Xeon server CPU for the ACCA algorithm and 20% for the dimension reduction algorithm, while the CPU has 13.5 times higher power consumption.

  17. Metal implants on CT: comparison of iterative reconstruction algorithms for reduction of metal artifacts with single energy and spectral CT scanning in a phantom model.

    PubMed

    Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R

    2017-03-01

    To assess single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom with various metal implants was scanned with and without SEMAR (Aquilion One, Toshiba) and with and without MARS (Discovery CT750 HD, GE). Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). Artifacts newly introduced by the metal artifact reduction algorithm were significant for MARS (2.4 ± 1.0) but minimal for SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in the reduction of metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.

  18. Prospects for Advanced Tokamak Operation of ITER

    NASA Astrophysics Data System (ADS)

    Neilson, George H.

    1996-11-01

    Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design), or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allows operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure. This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles--a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. The inclusion of modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these, and other, operating regimes.

  19. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, antennas are required to meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve this problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced in the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was chosen to provide the required beamwidth and close to a null in the E-plane end-fire direction. Because of the alternating slot offsets, grating lobes called butterfly lobes are produced in non-principal planes close to the H-plane. An attempt to reduce the influence of such grating lobes resulted in a symmetric design.

  20. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described through the scattering process in an inhomogeneous medium. An inherent nonlinearity relates the scattering medium to the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media; this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions, which is critical when imaging a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of the BPM are similar to neural networks; therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.

  1. Control advances for achieving the ITER baseline scenario on KSTAR

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Barr, J.; Hahn, S. H.; Humphreys, D. A.; In, Y. K.; Jeon, Y. M.; Lanctot, M. J.; Mueller, D.; Walker, M. L.

    2017-10-01

    Control methodologies developed to enable successful production of ITER baseline scenario (IBS) plasmas on the superconducting KSTAR tokamak are presented: decoupled vertical control (DVC), real-time feedforward (rtFF) calculation, and multi-input multi-output (MIMO) X-point control. DVC provides fast vertical control with the in-vessel control coils (IVCC) while sharing slow vertical control with the poloidal field (PF) coils to avoid IVCC saturation. rtFF compensates for inaccuracies in offline PF current feedforward programming, allowing reduction or removal of integral gain (and its detrimental phase lag) from the shape controller. Finally, MIMO X-point control provides accurate positioning of the X-point despite low controllability due to the large distance between coils and plasma. Combined, these techniques enabled achievement of IBS parameters (q95 = 3.2, βN = 2) with a scaled ITER shape on KSTAR. The n = 2 RMP response displays a strong dependence upon this shaping. Work supported by the US DOE under Award DE-SC0010685 and the KSTAR project.

  2. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by the total-variation minimization (TVM) algorithm, as a realization of the fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those obtained by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  3. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    NASA Astrophysics Data System (ADS)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in less iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.

  4. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is generated by observing the behaviour of cats and composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven to be superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among the other relevant techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
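
    A much-simplified CSO-style sketch for the low-pass case: cats in seeking mode try local copies of themselves, cats in tracing mode update a velocity toward the best cat, and fitness is the squared deviation from an ideal magnitude response. All parameter values are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(7)

        # Fitness: squared error between the filter's magnitude response and an
        # ideal low-pass response (cutoff at 0.25 pi), on a dense frequency grid.
        M = 20                                             # filter order (M + 1 taps)
        w = np.linspace(0.0, np.pi, 128)
        ideal = (w <= 0.25 * np.pi).astype(float)
        E = np.exp(-1j * np.outer(w, np.arange(M + 1)))    # frequency-response matrix

        def fitness(h):
            return np.sum((np.abs(E @ h) - ideal) ** 2)

        n_cats, mixture_ratio, srd = 20, 0.2, 0.05   # swarm size, tracing share, seek radius
        pos = rng.uniform(-0.5, 0.5, (n_cats, M + 1))      # cat positions = tap vectors
        vel = np.zeros_like(pos)

        for _ in range(300):
            best = min(pos, key=fitness).copy()            # best cat this sweep
            tracing = rng.random(n_cats) < mixture_ratio   # per-cat mode flag
            for c in range(n_cats):
                if tracing[c]:                             # tracing mode: chase the best
                    vel[c] = np.clip(vel[c] + 2.0 * rng.random() * (best - pos[c]),
                                     -0.2, 0.2)            # velocity limit, as in CSO
                    pos[c] += vel[c]
                else:                                      # seeking mode: try local copies
                    copies = pos[c] + srd * rng.normal(size=(5, M + 1))
                    pos[c] = min(copies, key=fitness)

        print("best fitness:", round(min(fitness(p) for p in pos), 4))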

  5. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
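
    The Picard-versus-Newton contrast is easy to reproduce on a single implicit-Euler step of a toy nonlinear coupled system (invented here, not the noisy diffusion testbed of the paper): as the solution amplitude grows, the Picard iteration count climbs steeply and eventually hits its cap, while Newton's stays nearly flat.

        import numpy as np

        def step_picard(u0, v0, dt, tol=1e-10, maxit=200):
            """Implicit-Euler step for u' = -u^3 + v, v' = -v + u via Picard iteration."""
            u, v = u0, v0
            for k in range(1, maxit + 1):
                u_new = u0 + dt * (-u ** 3 + v)
                v_new = v0 + dt * (-v + u)
                if max(abs(u_new - u), abs(v_new - v)) < tol:
                    return k
                u, v = u_new, v_new
            return maxit                    # caps at maxit when it fails to converge

        def step_newton(u0, v0, dt, tol=1e-10, maxit=200):
            """Same step solved with Newton's method on the residual F(u, v) = 0."""
            u, v = u0, v0
            for k in range(1, maxit + 1):
                F = np.array([u - u0 - dt * (-u ** 3 + v),
                              v - v0 - dt * (-v + u)])
                J = np.array([[1 + 3 * dt * u ** 2, -dt],
                              [-dt, 1 + dt]])
                du = np.linalg.solve(J, -F)
                u, v = u + du[0], v + du[1]
                if np.abs(du).max() < tol:
                    return k
            return maxit

        # Larger initial amplitude (a stand-in for stronger noise) slows Picard
        # sharply while Newton's iteration count grows only mildly.
        for amp in (0.5, 2.0, 4.0):
            print(f"amp={amp}: Picard {step_picard(amp, 0.0, 0.05):3d} iters, "
                  f"Newton {step_newton(amp, 0.0, 0.05)} iters")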

  6. Constructing Integrable Full-pressure Full-current Free-boundary Stellarator Magnetohydrodynamic Equilibria

    NASA Astrophysics Data System (ADS)

    Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.

    2003-06-01

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces, and chaotic field lines result when magnetic islands overlap. An analogous case occurs with 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Techniques for 'healing' vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary full-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A.H. Reiman & H.S. Greenside, Comp. Phys. Comm., 43:157, 1986], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G.H. Neilson et al., Phys. Plasmas, 7:1911, 2000].

  7. Reduction of effective dose and organ dose to the eye lens in head MDCT using iterative image reconstruction and automatic tube current modulation.

    PubMed

    Ryska, Pavel; Kvasnicka, Tomas; Jandura, Jiri; Klzo, Ludovit; Grepl, Jakub; Zizka, Jan

    2014-06-01

    To compare the effective and eye lens radiation dose in helical MDCT brain examinations using automatic tube current modulation in conjunction with either the standard filtered back projection (FBP) technique or iterative reconstruction in image space (IRIS). Of 400 adult brain MDCT examinations, 200 were performed using FBP and 200 using IRIS with the following parameters: tube voltage 120 kV, rotation period 1 second, pitch factor 0.55, automatic tube current modulation in both transverse and longitudinal planes with reference mAs 300 (FBP) and 200 (IRIS). Doses were calculated from CT dose index and dose length product values utilising ImPACT software; the organ dose to the lens was derived from the actual tube current-time product value applied to the lens. Image quality was assessed by two independent readers blinded to the type of image reconstruction technique. The average effective scan dose was 1.47±0.26 mSv (FBP) and 0.98±0.15 mSv (IRIS), respectively (a 33.3% decrease). The average organ dose to the eye lens decreased from 40.0±3.3 mGy (FBP) to 26.6±2.0 mGy (IRIS, a 33.5% decrease). No significant change in diagnostic image quality was noted between IRIS and FBP scans (P=0.17). Iterative reconstruction of cerebral MDCT examinations enables reduction of both the effective dose and the organ dose to the eye lens by one third without significant loss of image quality.
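
    For orientation, effective dose in such studies is commonly approximated as a conversion coefficient times the dose-length product; with the standard adult-head coefficient of about 0.0021 mSv/(mGy·cm), the hypothetical DLP values below reproduce the effective doses reported above (the study itself used ImPACT, not this shortcut).

        # Rule-of-thumb conversion from dose-length product (DLP) to effective dose:
        # E ~ k * DLP, with k ~ 0.0021 mSv/(mGy*cm) for adult head CT.
        k_head = 0.0021                      # approximate adult-head coefficient
        dlp = {"FBP": 700.0, "IRIS": 467.0}  # hypothetical DLPs, mGy*cm

        for name, value in dlp.items():
            print(f"{name}: E ~ {k_head * value:.2f} mSv")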

  8. Voting strategy for artifact reduction in digital breast tomosynthesis.

    PubMed

    Wu, Tao; Moore, Richard H; Kopans, Daniel B

    2006-07-01

    Artifacts are observed in digital breast tomosynthesis (DBT) reconstructions due to the small number of projections and the narrow angular range that are typically employed in tomosynthesis imaging. In this work, we investigate the reconstruction artifacts that are caused by high-attenuation features in the breast and develop several artifact reduction methods based on a "voting strategy." The voting strategy identifies the projection(s) that would introduce artifacts to a voxel and rejects the projection(s) when reconstructing the voxel. Four approaches to the voting strategy were compared, including projection segmentation, maximum contribution deduction, one-step classification, and iterative classification. The projection segmentation method, based on segmentation of high-attenuation features from the projections, effectively reduces artifacts caused by metal and large calcifications that can be reliably detected and segmented from projections. The other three methods are based on the observation that contributions from artifact-inducing projections have higher values than those from normal projections. These methods attempt to identify the projection(s) that would cause artifacts by comparing contributions from different projections. Among the three methods, the iterative classification method provides the best artifact reduction; however, it can generate many false positive classifications that degrade the image quality. The maximum contribution deduction method and the one-step classification method both reduce artifacts from small calcifications well, although the performance of artifact reduction is slightly better with one-step classification. The combination of one-step classification and projection segmentation removes artifacts from both large and small calcifications.
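
    A minimal sketch in the spirit of the one-step classification: per-voxel contributions from each projection are compared, and outlying contributions are excluded from the average. The array shapes and the outlier rule are illustrative assumptions, not the published algorithm.

        import numpy as np

        def vote_reconstruct(contribs, z=3.0):
            # contribs: (n_projections, nx, ny) backprojected values per voxel.
            # A projection's contribution to a voxel is rejected when it lies
            # far above the mean of all contributions for that voxel.
            mean = contribs.mean(axis=0)
            std = contribs.std(axis=0) + 1e-12
            keep = contribs <= mean + z * std        # votes: True = use this view
            return (contribs * keep).sum(axis=0) / keep.sum(axis=0)

        recon = vote_reconstruct(np.random.default_rng(0).random((15, 64, 64)))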

  9. When Homoplasy Is Not Homoplasy: Dissecting Trait Evolution by Contrasting Composite and Reductive Coding.

    PubMed

    Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga

    2018-05-01

    The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or whether the characters are just mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there has been no common agreement on whether to code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows one to test whether 1) several characters were linked, resulting in a structure described as a complex character or trait, or 2) independently evolving characters resulted in a configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to unthread fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.

  10. Domain-wall excitations in the two-dimensional Ising spin glass

    NASA Astrophysics Data System (ADS)

    Khoshbakht, Hamid; Weigel, Martin

    2018-02-01

    The Ising spin glass in two dimensions exhibits rich behavior with subtle differences in the scaling for different coupling distributions. We use recently developed mappings to graph-theoretic problems together with highly efficient implementations of combinatorial optimization algorithms to determine exact ground states for systems on square lattices with up to 10 000 × 10 000 spins. While these mappings only work for planar graphs, for example for systems with periodic boundary conditions in at most one direction, we suggest here an iterative windowing technique that allows one to determine ground states for fully periodic samples up to sizes similar to those for the open-periodic case. Based on these techniques, a large number of disorder samples are used together with a careful finite-size scaling analysis to determine the stiffness exponents and domain-wall fractal dimensions with unprecedented accuracy, our best estimates being θ = -0.2793(3) and d_f = 1.27319(9) for Gaussian couplings. For bimodal disorder, a new uniform sampling algorithm allows us to study the domain-wall fractal dimension, finding d_f = 1.279(2). Additionally, we also investigate the distributions of ground-state energies, of domain-wall energies, and of domain-wall lengths.
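
    A toy illustration of the finite-size-scaling step (synthetic data generated under an assumed power law, not the paper's measurements): the stiffness exponent is the slope of log|ΔE| versus log L.

        import numpy as np

        # Domain-wall energies are expected to scale as |dE| ~ L**theta; here we
        # generate synthetic data with theta = -0.28 and recover it by a log-log fit.
        rng = np.random.default_rng(1)
        L = np.array([32, 64, 128, 256, 512, 1024])
        dE = 2.0 * L**-0.28 * (1 + 0.01 * rng.normal(size=L.size))
        theta, log_amp = np.polyfit(np.log(L), np.log(dE), 1)
        print(f"estimated stiffness exponent theta = {theta:.3f}")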

  11. Conceptualizing Couples’ Decision Making in PGD: Emerging Cognitive, Emotional, and Moral Dimensions

    PubMed Central

    Hershberger, Patricia E.; Pierce, Penny F.

    2009-01-01

    Objective: To illuminate and synthesize what is known about the underlying decision making processes surrounding couples’ preimplantation genetic diagnosis (PGD) use or disuse and to formulate an initial conceptual framework that can guide future research and practice. Methods: This systematic review targeted empirical studies published in English from 1990 to 2008 that examined the decision making process of couples or individual partners that had used, were eligible for, or had contemplated PGD. Sixteen studies met the eligibility requirements. To provide a more comprehensive review, empirical studies that examined healthcare professionals’ perceptions of couples’ decision making surrounding PGD use and key publications from a variety of disciplines supplemented the analysis. Results: The conceptual framework formulated from the review demonstrates that couples’ PGD decision making is composed of three iterative and dynamic dimensions: cognitive appraisals, emotional responses, and moral judgments. Conclusion: Couples think critically about uncertain and probabilistic information, grapple with conflicting emotions and incorporate moral perspectives into their decision making about whether or not to use PGD. Practice Implications: The quality of care and decisional support for couples who are contemplating PGD use can be improved by incorporating focused questions and discussion from each of the dimensions into counseling sessions. PMID:20060677

  12. Defining Malaysian Knowledge Society: Results from the Delphi Technique

    NASA Astrophysics Data System (ADS)

    Hamid, Norsiah Abdul; Zaman, Halimah Badioze

    This paper outlines the findings of research whose central idea is to define the term Knowledge Society (KS) in the Malaysian context. The research focuses on three important dimensions, namely knowledge, ICT and human capital. The study adopts a modified Delphi technique to seek the important dimensions that can contribute to the development of Malaysia's KS. The Delphi technique involved ten experts in a five-round iterative and controlled feedback procedure to obtain consensus on the important dimensions and to verify the proposed definition of KS. The findings show that all three dimensions proposed initially scored high or moderate consensus. Round One (R1) proposed an initial definition of KS and solicited comments and inputs from the panel. These inputs were then used to develop items for the R2 questionnaire. In R2, 56 out of 73 items scored high consensus, and in R3, 63 out of 90 items scored high. R4 was conducted to re-rate the new items, of which 8 out of 17 scored high. The other items scored moderate consensus, and no item scored low or no consensus in any round. The final round (R5) was employed to verify the final definition of KS. The findings of this study are significant for the definition of KS and the development of a framework in the Malaysian context.

  13. Computational procedures for evaluating the sensitivity derivatives of vibration frequencies and Eigenmodes of framed structures

    NASA Technical Reports Server (NTRS)

    Fetterman, Timothy L.; Noor, Ahmed K.

    1987-01-01

    Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
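
    The central quantity in such sensitivity analyses is the derivative of an eigenvalue of the generalized problem K(p)φ = λM(p)φ; for an M-normalized eigenvector the classical result dλ/dp = φᵀ(dK/dp − λ dM/dp)φ applies. A small self-contained check, with assumed toy matrices rather than the paper's framed-structure models:

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 5
        A = rng.normal(size=(n, n))
        K = A @ A.T + n * np.eye(n)            # toy stiffness matrix (SPD)
        M = np.eye(n)                          # toy mass matrix
        dK = np.diag(np.arange(1.0, n + 1))    # assumed parameter derivative dK/dp
        dM = np.zeros((n, n))                  # assumed dM/dp

        lam, phi = eigh(K, M)                  # phi columns are M-orthonormal
        i = 0                                  # lowest vibration mode
        v = phi[:, i]
        dlam = v @ (dK - lam[i] * dM) @ v      # eigenvalue sensitivity dlambda/dp

        # finite-difference check of the analytic derivative
        h = 1e-6
        lam_h = eigh(K + h * dK, M, eigvals_only=True)
        print(dlam, (lam_h[i] - lam[i]) / h)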

  14. A path to stable low-torque plasma operation in ITER with test blanket modules

    DOE PAGES

    Lanctot, Matthew J.; Snipes, J. A.; Reimerdes, H.; ...

    2016-12-12

    New experiments in the low-torque ITER Q = 10 scenario on DIII-D demonstrate that n = 1 magnetic fields from a single row of ex-vessel control coils enable operation at ITER performance metrics in the presence of applied non-axisymmetric magnetic fields from a test blanket module (TBM) mock-up coil. With n = 1 compensation, operation below the ITER-equivalent injected torque is successful at three times the ITER equivalent toroidal magnetic field ripple for a pair of TBMs in one equatorial port, whereas the uncompensated TBM field leads to rotation collapse, loss of H-mode and plasma current disruption. In companion experiments at high plasma beta, where the n = 1 plasma response is enhanced, uncorrected TBM fields degrade energy confinement and the plasma angular momentum while increasing fast ion losses; however, disruptions are not routinely encountered owing to increased levels of injected neutral beam torque. In this regime, n = 1 field compensation leads to recovery of a dominant fraction of the TBM-induced plasma pressure and rotation degradation, and an 80% reduction in the heat load to the first wall. These results show that the n = 1 plasma response plays a dominant role in determining plasma stability, and that n = 1 field compensation alone not only recovers most of the impact on plasma performance of the TBM, but also protects the first wall from potentially damaging heat flux. Despite these benefits, plasma rotation braking from the TBM fields cannot be fully recovered using standard error field control. Lastly, given the uncertainty in extrapolation of these results to the ITER configuration, it is prudent to design the TBMs with as low a ferromagnetic mass as possible without jeopardizing the TBM mission.

  15. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
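
    A minimal sketch of the iterated model-error update in the linear Gaussian setting (all matrices and noise levels below are toy assumptions): the discrepancy between the accurate and the approximate forward maps is treated as Gaussian, its mean and covariance are re-estimated from the current posterior, and the inversion is repeated.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 20, 15
        A_fine = rng.normal(size=(m, n))                     # accurate forward model (toy)
        A_coarse = A_fine + 0.05 * rng.normal(size=(m, n))   # cheap approximate model (toy)
        x_true = rng.normal(size=n)
        sigma2 = 0.01
        y = A_fine @ x_true + np.sqrt(sigma2) * rng.normal(size=m)

        C0 = np.eye(n)                                       # prior covariance
        mu, C = np.zeros(n), C0
        for _ in range(5):
            D = A_fine - A_coarse
            e_mean, e_cov = D @ mu, D @ C @ D.T              # modeling-error statistics
            S = A_coarse @ C0 @ A_coarse.T + sigma2 * np.eye(m) + e_cov
            G = C0 @ A_coarse.T @ np.linalg.inv(S)           # Kalman-type gain
            mu = G @ (y - e_mean)                            # posterior mean update
            C = C0 - G @ A_coarse @ C0                       # posterior covariance update
            print(np.linalg.norm(mu - x_true))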

  16. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
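
    A compact sketch of an ISTA-style iteration with a p-thresholding shrinkage step. The exact form of the operator varies across papers; the version below, and the toy sparse-recovery problem, are illustrative assumptions.

        import numpy as np

        def p_threshold(x, lam, p=0.7, eps=1e-12):
            # generalized shrinkage: reduces each magnitude by lam*p*|x|**(p-1);
            # p = 1 recovers the usual soft-thresholding operator
            mag = np.maximum(np.abs(x) - lam * p * (np.abs(x) + eps) ** (p - 1.0), 0.0)
            return np.sign(x) * mag

        def ista_p(A, y, lam=0.1, p=0.7, n_iter=300):
            step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = p_threshold(x - step * (A.T @ (A @ x - y)), lam * step, p)
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 100))
        x0 = np.zeros(100)
        x0[[3, 30, 70]] = [1.0, -2.0, 1.5]               # sparse ground truth (toy)
        x_hat = ista_p(A, A @ x0)
        print(np.round(x_hat[[3, 30, 70]], 2))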

  17. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
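
    A back-of-the-envelope model of why each lever matters (our simplification for illustration, not the DART community model): if a step has once-through time t, backward-iteration probability q, and rework fraction r, and iterations are independent, the expected total time is t·(1 + qr + (qr)² + ...) = t/(1 − qr).

        # Toy comparison of the three levers: faster once-through time, lower
        # iteration probability, and lower rework fraction (values assumed).
        def expected_time(t, q, r):
            return t / (1.0 - q * r)   # geometric series over repeated rework

        base = expected_time(10.0, 0.4, 0.8)
        print(expected_time(8.0, 0.4, 0.8) / base)    # 20% faster once-through
        print(expected_time(10.0, 0.3, 0.8) / base)   # fewer backward iterations
        print(expected_time(10.0, 0.4, 0.6) / base)   # less rework per iteration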

  18. New Dimensions for the Multicultural Education Course

    ERIC Educational Resources Information Center

    Gay, Richard

    2011-01-01

    For the past sixteen years, the Five Dimensions of Multicultural Education, as proposed by James A. Banks (1995), have been accepted in many circles as the primary conceptual framework used in teaching multicultural education courses: content integration, the knowledge construction process, prejudice reduction, an equity pedagogy and an empowering…

  19. A general soft label based linear discriminant analysis for semi-supervised dimensionality reduction.

    PubMed

    Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing

    2014-07-01

    Dealing with high-dimensional data has always been a major problem in research on pattern recognition and machine learning, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it only uses labeled samples while neglecting unlabeled samples, which are abundant and can be easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", which uses unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, where the predicted labels of unlabeled samples, called "soft labels", can be obtained. It then incorporates the soft labels into the construction of the scatter matrices to find a transformation matrix for dimension reduction. In this way, the proposed method can preserve more discriminative information, which is preferable when solving the classification problem. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible method of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
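
    A sketch of LDA driven by soft labels: F holds per-sample class-membership weights (one-hot rows for labeled points, propagated soft labels for unlabeled ones). The scatter definitions, the ridge term, and the toy data are our assumptions; the published SL-LDA construction may differ in detail.

        import numpy as np

        def soft_label_lda(X, F, out_dim):
            # X: (n, d) data; F: (n, c) soft class memberships, rows sum to 1
            mu = X.mean(axis=0)
            n_k = F.sum(axis=0)                          # effective class sizes
            M = (F.T @ X) / n_k[:, None]                 # weighted class means
            Sb = ((M - mu).T * n_k) @ (M - mu)           # between-class scatter
            Xc = X - F @ M                               # residuals to class means
            Sw = Xc.T @ Xc + 1e-6 * np.eye(X.shape[1])   # within-class scatter + ridge
            evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
            order = np.argsort(evals.real)[::-1]
            return evecs[:, order[:out_dim]].real        # projection matrix (d, out_dim)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 10))
        F = rng.dirichlet(np.ones(3), size=100)          # toy soft labels
        W = soft_label_lda(X, F, 2)
        print(W.shape)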

  1. Fear of Fear and Broad Dimensions of Psychopathology over the Course of Cognitive Behavioural Therapy for Panic Disorder with Agoraphobia in Japan.

    PubMed

    Ogawa, S; Kondo, M; Ino, K; Ii, T; Imai, R; Furukawa, T A; Akechi, T

    2017-12-01

    To examine the relationship between fear of fear and broad dimensions of psychopathology in panic disorder with agoraphobia over the course of cognitive behavioural therapy in Japan. A total of 177 Japanese patients with panic disorder with agoraphobia were treated with group cognitive behavioural therapy between 2001 and 2015. We examined associations between the change scores on the Agoraphobic Cognitions Questionnaire or Body Sensations Questionnaire and the changes in subscales of the Symptom Checklist-90 Revised during cognitive behavioural therapy, controlling for the change in panic disorder severity using multiple regression analysis. Reduction in the Agoraphobic Cognitions Questionnaire score was related to a decrease in all Symptom Checklist-90 Revised (SCL-90-R) subscale scores. Reduction in the Body Sensations Questionnaire score was associated with a decrease in anxiety. Reduction in the Panic Disorder Severity Scale score was not related to any SCL-90-R subscale changes. Changes in fear of fear, especially maladaptive cognitions, may predict reductions in broad dimensions of psychopathology in patients with panic disorder with agoraphobia over the course of cognitive behavioural therapy. For the sake of improving a broader range of psychiatric symptoms in patients with panic disorder with agoraphobia, more attention to changes in maladaptive cognitions during cognitive behavioural therapy is warranted.

  2. Structured Ordinary Least Squares: A Sufficient Dimension Reduction approach for regressions with partitioned predictors and heterogeneous units.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Li, Bing

    2017-06-01

    In many scientific and engineering fields, advanced experimental and computing technologies are producing data that are not just high dimensional, but also internally structured. For instance, statistical units may have heterogeneous origins from distinct studies or subpopulations, and features may be naturally partitioned based on experimental platforms generating them, or on information available about their roles in a given phenomenon. In a regression analysis, exploiting this known structure in the predictor dimension reduction stage that precedes modeling can be an effective way to integrate diverse data. To pursue this, we propose a novel Sufficient Dimension Reduction (SDR) approach that we call structured Ordinary Least Squares (sOLS). This combines ideas from existing SDR literature to merge reductions performed within groups of samples and/or predictors. In particular, it leads to a version of OLS for grouped predictors that requires far less computation than recently proposed groupwise SDR procedures, and provides an informal yet effective variable selection tool in these settings. We demonstrate the performance of sOLS by simulation and present a first application to genomic data. The R package "sSDR," publicly available on CRAN, includes all procedures necessary to implement the sOLS approach. © 2016, The International Biometric Society.
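
    A toy sketch of the groupwise idea (the group structure and data are assumed; the actual sOLS estimator in the sSDR package includes further refinements): compute the OLS direction Cov(X_g)^{-1} Cov(X_g, y) within each predictor group, project each group onto its direction, then merge the group reductions with a final OLS step.

        import numpy as np

        def ols_direction(X, y):
            # sample version of Cov(X)^{-1} Cov(X, y)
            Xc = X - X.mean(axis=0)
            yc = y - y.mean()
            return np.linalg.solve(Xc.T @ Xc / len(y), Xc.T @ yc / len(y))

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 9))
        y = X[:, 0] - 2 * X[:, 4] + 0.1 * rng.normal(size=200)
        groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]       # assumed predictor partition
        Z = np.column_stack([X[:, g] @ ols_direction(X[:, g], y) for g in groups])
        beta = ols_direction(Z, y)                       # merge within-group reductions
        print(np.round(beta, 2))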

  3. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    PubMed

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.

  4. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique

    PubMed Central

    Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-01-01

    Objective: To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. Methods: 27 consecutive patients (mean body mass index: 23.55 kg m−2) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. Results: At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19–49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Conclusion: Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. Advances in knowledge: This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option. PMID:26234823

  5. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the influence of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, statistically significant differences in contrast-to-noise ratios (all, P ≤ .012) were shown for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100 a small effect, on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.
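
    For reference, the contrast-to-noise ratio is typically computed directly from ROI statistics; the values below are illustrative, not measurements from the study:

        import numpy as np

        # CNR = |mean(tissue) - mean(reference)| / noise, with the noise usually
        # taken as the standard deviation of a background or reference ROI.
        nerve = np.array([62.0, 64.5, 61.2, 63.8])     # HU samples, optic-nerve ROI (toy)
        fat = np.array([-78.0, -81.5, -80.2, -79.3])   # HU samples, orbital-fat ROI (toy)
        cnr = abs(nerve.mean() - fat.mean()) / fat.std(ddof=1)
        print(f"CNR = {cnr:.1f}")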

  6. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation, while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of the signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
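
    One plausible way to write the kind of joint objective described (our schematic, not the authors' exact formulation), with x_H, x_L the reconstructed high- and low-energy images, p_H, p_L the measured projections, A the forward projector, and D the image-domain decomposition operator:

        \min_{x_H, x_L} \; \|A x_H - p_H\|^2 + \|A x_L - p_L\|^2 + \lambda \, \mathrm{TV}\big(D(x_H, x_L)\big)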

  7. Organizing symmetry-protected topological phases by layering and symmetry reduction: A minimalist perspective

    NASA Astrophysics Data System (ADS)

    Xiong, Charles Zhaoxi; Alexandradinata, A.

    2018-03-01

    It is demonstrated that fermionic/bosonic symmetry-protected topological (SPT) phases across different dimensions and symmetry classes can be organized using geometric constructions that increase dimensions and symmetry-reduction maps that change symmetry groups. Specifically, it is shown that the interacting classifications of SPT phases with and without glide symmetry fit into a short exact sequence, so that the classification with glide is constrained to be a direct sum of cyclic groups of order 2 or 4. Applied to fermionic SPT phases in the Wigner-Dyson class AII, this implies that the complete interacting classification in the presence of glide is Z4⊕Z2⊕Z2 in three dimensions. In particular, the hourglass-fermion phase recently realized in the band insulator KHgSb must be robust to interactions. Generalizations to spatiotemporal glide symmetries are discussed.

  8. The attractor dimension of solar decimetric radio pulsations

    NASA Technical Reports Server (NTRS)

    Kurths, J.; Benz, A. O.; Aschwanden, M. J.

    1991-01-01

    The temporal characteristics of decimetric pulsations and related radio emissions during solar flares are analyzed using statistical methods recently developed for nonlinear dynamic systems. The results of the analysis are consistent with earlier reports on low-dimensional attractors of such events and yield a quantitative description of their temporal characteristics and hidden order. The estimated dimensions of typical decimetric pulsations are generally in the range of 3.0 ± 0.5. Quasi-periodic oscillations and sudden reductions may have dimensions as low as 2. Pulsations of decimetric type IV continua typically have a dimension of about 4.
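
    A standard estimator behind such results is the Grassberger-Procaccia correlation dimension. The sketch below, with an assumed toy quasi-periodic signal for which a dimension near 2 is expected, illustrates the log-log slope estimate; the embedding parameters and radii are assumptions.

        import numpy as np

        def correlation_dimension(x, m=4, tau=2):
            # delay-embed the series in m dimensions, then read the dimension off
            # the slope of log C(r) versus log r for the pair-correlation sum C(r)
            n = len(x) - (m - 1) * tau
            emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            d = d[np.triu_indices_from(d, k=1)]
            radii = np.quantile(d, [0.02, 0.05, 0.1, 0.2])
            C = np.array([(d < r).mean() for r in radii])
            slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
            return slope

        t = np.linspace(0.0, 60.0, 600)
        signal = np.sin(t) + 0.7 * np.sin(np.sqrt(2.0) * t)   # toy quasi-periodic signal
        print(correlation_dimension(signal))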

  9. Iterative management of heat early warning systems in a changing climate.

    PubMed

    Hess, Jeremy J; Ebi, Kristie L

    2016-10-01

    Extreme heat is a leading weather-related cause of morbidity and mortality, with heat exposure becoming more widespread, frequent, and intense as climates change. The use of heat early warning and response systems (HEWSs) that integrate weather forecasts with risk assessment, communication, and reduction activities is increasingly widespread. HEWSs are frequently touted as an adaptation to climate change, but little attention has been paid to the question of how best to ensure effectiveness of HEWSs as climates change further. In this paper, we discuss findings showing that HEWSs satisfy the tenets of an intervention that facilitates adaptation, but climate change poses challenges infrequently addressed in heat action plans, particularly changes in the onset, duration, and intensity of dangerously warm temperatures, and changes over time in the relationships between temperature and health outcomes. Iterative management should be central to a HEWS, and iteration cycles should be of 5 years or less. Climate change adaptation and implementation science research frameworks can be used to identify HEWS modifications to improve their effectiveness as temperature continues to rise, incorporating scientific insights and new understanding of effective interventions. We conclude that, at a minimum, iterative management activities should involve planned reassessment at least every 5 years of hazard distribution, population-level vulnerability, and HEWS effectiveness. © 2016 New York Academy of Sciences.

  10. Robots and service innovation in health care.

    PubMed

    Oborn, Eivor; Barrett, Michael; Darzi, Ara

    2011-01-01

    Robots have long captured our imagination and are being used increasingly in health care. In this paper we summarize, organize and criticize the health care robotics literature and highlight how the social and technical elements of robots iteratively influence and redefine each other. We suggest the need for increased emphasis on sociological dimensions of using robots, recognizing how social and work relations are restructured during changes in practice. Further, we propose the usefulness of a 'service logic' in providing insight as to how robots can influence health care innovation. The Royal Society of Medicine Press Ltd 2011.

  11. A computer program for analyzing unresolved Mossbauer hyperfine spectra

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Singh, J. J.

    1978-01-01

    The program for analyzing unresolved Mossbauer hyperfine spectra was written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system 1.1. With the present dimensions, the program requires approximately 36,000 octal locations of core storage. A typical case involving two innermost coordination shells in which the amplitudes and the peak positions of all three components were estimated in 25 iterations requires 30 seconds on CYBER 173. The program was applied to determine the effects of various near neighbor impurity shells on hyperfine fields in dilute FeAl alloys.

  12. Globally convergent techniques in nonlinear Newton-Krylov

    NASA Technical Reports Server (NTRS)

    Brown, Peter N.; Saad, Youcef

    1989-01-01

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
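
    As a concrete instance of the inexact Newton-Krylov idea with linesearch globalization (the toy discretized boundary-value system below is assumed for illustration), SciPy's newton_krylov pairs a GMRES inner solve with an Armijo line search:

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            # toy nonlinear boundary-value residual on a uniform grid
            r = np.zeros_like(u)
            r[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:] - 0.01 * np.exp(u[1:-1])
            r[0], r[-1] = u[0], u[-1] - 1.0      # Dirichlet boundary conditions
            return r

        sol = newton_krylov(residual, np.zeros(50), method="gmres", line_search="armijo")
        print(np.abs(residual(sol)).max())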

  13. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.

  14. Low-dose CT imaging of a total hip arthroplasty phantom using model-based iterative reconstruction and orthopedic metal artifact reduction.

    PubMed

    Wellenberg, R H H; Boomsma, M F; van Osch, J A C; Vlassenbroek, A; Milles, J; Edens, M A; Streekstra, G J; Slump, C H; Maas, M

    2017-05-01

    To compare quantitative measures of image quality, in terms of CT number accuracy, noise, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs), at different dose levels with filtered back-projection (FBP), iterative reconstruction (IR), and model-based iterative reconstruction (MBIR) alone and in combination with orthopedic metal artifact reduction (O-MAR) in a total hip arthroplasty (THA) phantom. Scans were acquired from high to low dose (CTDIvol: 40.0, 32.0, 24.0, 16.0, 8.0, and 4.0 mGy) at 120 and 140 kVp. Images were reconstructed using FBP, IR (iDose4 level 2, 4, and 6) and MBIR (IMR, level 1, 2, and 3) with and without O-MAR. CT number accuracy in Hounsfield Units (HU), noise or standard deviation, SNRs, and CNRs were analyzed. The IMR technique showed lower noise levels (p < 0.01), higher SNRs (p < 0.001) and CNRs (p < 0.001) compared with FBP and iDose4 in all acquisitions from high to low dose, with constant CT numbers. O-MAR reduced noise (p < 0.01) and improved SNRs (p < 0.01) and CNRs (p < 0.001), while improving CT number accuracy only at a low dose. At the low dose of 4.0 mGy, IMR levels 1, 2, and 3 showed 83%, 89%, and 95% lower noise values, a factor 6.0, 9.2, and 17.9 higher SNRs, and a factor 5.7, 8.8, and 18.2 higher CNRs compared with FBP, respectively. Based on quantitative analysis of CT number accuracy, noise values, SNRs, and CNRs, we conclude that the combined use of IMR and O-MAR enables a reduction in radiation dose of 83% compared with FBP and iDose4 in the CT imaging of a THA phantom.

  15. Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.

    Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. In conclusion, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.

  16. Diagnostic accuracy of 256-row multidetector CT coronary angiography with prospective ECG-gating combined with fourth-generation iterative reconstruction algorithm in the assessment of coronary artery bypass: evaluation of dose reduction and image quality.

    PubMed

    Ippolito, Davide; Fior, Davide; Franzesi, Cammillo Talei; Riva, Luca; Casiraghi, Alessandra; Sironi, Sandro

    2017-12-01

    Effective radiation dose in coronary CT angiography (CTCA) for coronary artery bypass graft (CABG) evaluation is remarkably high because of long scan lengths. Prospective electrocardiographic gating with iterative reconstruction can reduce the effective radiation dose. To evaluate the diagnostic performance of a low-kV CT angiography protocol with a prospective ECG-gating technique and an iterative reconstruction (IR) algorithm in the follow-up of CABG patients, compared with the standard retrospective protocol, seventy-four non-obese patients with known coronary disease treated with artery bypass grafting were prospectively enrolled. All the patients underwent 256-MDCT (Brilliance iCT, Philips) CTCA using a low-dose protocol (100 kV; 800 mAs; rotation time: 0.275 s) combined with prospective ECG-triggered acquisition and a fourth-generation IR technique (iDose4; Philips); the entire length of each bypass graft was included in the evaluation. A control group of 42 similar patients was evaluated with a standard retrospective ECG-gated CTCA (100 kV; 800 mAs). On both CT examinations, ROIs were placed to calculate the standard deviation of pixel values and the intra-vessel density. Diagnostic quality was also evaluated using a 4-point quality scale. Despite the statistically significant reduction of radiation dose evaluated with DLP (study group mean DLP: 274 mGy cm; control group mean DLP: 1224 mGy cm; P value < 0.001), no statistical differences were found between the PGA group and the RGH group regarding intra-vessel density absolute values and SNR. Qualitative analysis, performed by two radiologists in "double blind" fashion, did not reveal any significant difference in the diagnostic quality of the two groups. The development of high-speed MDCT scanners combined with modern IR allows an accurate evaluation of CABG with prospective ECG-gating protocols in a single breath hold, obtaining a significant reduction in radiation dose.

  17. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    PubMed

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality were assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR-reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference between the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in both subjective and objective analyses. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  18. Radiation dose reduction in soft tissue neck CT using adaptive statistical iterative reconstruction (ASIR).

    PubMed

    Vachha, Behroze; Brodoefel, Harald; Wilcox, Carol; Hackney, David B; Moonis, Gul

    2013-12-01

    To compare objective and subjective image quality in neck CT images acquired at different tube current-time products (275 mAs and 340 mAs) and reconstructed with filtered-back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR). HIPAA-compliant study with IRB approval and waiver of informed consent. 66 consecutive patients were randomly assigned to undergo contrast-enhanced neck CT at a standard tube-current-time-product (340 mAs; n = 33) or reduced tube-current-time-product (275 mAs, n = 33). Data sets were reconstructed with FBP and 2 levels (30%, 40%) of ASIR-FBP blending at 340 mAs and 275 mAs. Two neuroradiologists assessed subjective image quality in a blinded and randomized manner. Volume CT dose index (CTDIvol), dose-length-product (DLP), effective dose, and objective image noise were recorded. Signal-to-noise ratio (SNR) was computed as mean attenuation in a region of interest in the sternocleidomastoid muscle divided by image noise. Compared with FBP, ASIR resulted in a reduction of image noise at both 340 mAs and 275 mAs. Reduction of tube current from 340 mAs to 275 mAs resulted in an increase in mean objective image noise (p=0.02) and a decrease in SNR (p = 0.03) when images were reconstructed with FBP. However, when the 275 mAs images were reconstructed using ASIR, the mean objective image noise and SNR were similar to those of the standard 340 mAs CT images reconstructed with FBP (p>0.05). Subjective image noise was ranked by both raters as either average or less-than-average irrespective of the tube current and iterative reconstruction technique. Adapting ASIR into neck CT protocols reduced effective dose by 17% without compromising image quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new imaging reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits provided by the ASIR method with respect to the pure FBP one, in terms of image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom when varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT scanner was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.

  1. The Mechanisms Underlying Changes in Broad Dimensions of Psychopathology During Cognitive Behavioral Therapy for Social Anxiety Disorder.

    PubMed

    Ogawa, Sei; Imai, Risa; Suzuki, Masako; Furukawa, Toshi A; Akechi, Tatsuo

    2017-12-01

    Social anxiety disorder (SAD) patients commonly have broad dimensions of psychopathology. This study investigated the relationship between a wide range of psychopathology and attention or cognitions during cognitive behavioral therapy (CBT) for SAD. We treated 96 SAD patients with group CBT. Using multiple regression analysis, we examined the associations between the changes in broad dimensions of psychopathology and the changes in self-focused attention or maladaptive cognitions in the course of CBT. The reduction in self-focused attention was related to the decreases in somatization, obsessive-compulsive, interpersonal sensitivity, anxiety, phobic anxiety, and global severity index. The reduction in maladaptive cognitions was associated with decreases in interpersonal sensitivity, depression, and global severity index. The present study suggests that changes in self-focused attention and maladaptive cognitions may predict broad dimensions of psychopathology changes in SAD patients over the course of CBT. For the purpose of improving a wide range of psychiatric symptoms in SAD patients in CBT, it may be useful to decrease self-focused attention and maladaptive cognitions.

  2. Stony Endocarp Dimension and Shape Variation in Prunus Section Prunus

    PubMed Central

    Depypere, Leander; Chaerle, Peter; Mijnsbrugge, Kristine Vander; Goetghebeur, Paul

    2007-01-01

    Background and Aims Identification of Prunus groups at subspecies or variety level is complicated by the wide range of variation and morphological transitional states. Knowledge of the degree of variability within and between species is a sine qua non for taxonomists. Here, a detailed study of endocarp dimension and shape variation for taxa of Prunus section Prunus is presented. Method The sample size necessary to obtain an estimation of the population mean with a precision of 5 % was determined by iteration. Two cases were considered: (1) the population represents an individual; and (2) the population represents a species. The intra-individual and intraspecific variation of Prunus endocarps was studied by analysing the coefficients of variance for dimension and shape parameters. Morphological variation among taxa was assessed using univariate statistics. The influence of the time of sampling and the level of hydration on endocarp dimensions and shape was examined by means of pairwise t-tests. In total, 14 endocarp characters were examined for five Eurasian plum taxa. Key Results All linear measurements and index values showed a low or normal variability on the individual and species level. In contrast, the parameter ‘Vertical Asymmetry’ had high coefficients of variance for one or more of the taxa studied. Of all dimension and shape parameters studied, only ‘Triangle’ differed significantly between mature endocarps of P. insititia sampled with a time difference of 1 month. The level of hydration affected endocarp dimensions and shape significantly. Conclusions Index values and the parameters ‘Perimeter’, ‘Area’, ‘Triangle’, ‘Ellipse’, ‘Circular’ and ‘Rectangular’, based on sample sizes and coefficients of variance, were found to be most appropriate for further taxonomic analysis. However, use of one, single endocarp parameter is not satisfactory for discrimination between Eurasian plum taxa, mainly because of overlapping ranges. Before analysing dried endocarps, full hydration is recommended, as this restores the original dimensions and shape. PMID:17965026
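
    The iterative sample-size determination is the one genuinely algorithmic step here: the t-quantile depends on n, so one increases n until the confidence half-width of the mean falls within the target precision. A minimal sketch under a normal model with a known coefficient of variation; the abstract does not spell out the authors' exact scheme, so this mirrors it only in spirit:

    ```python
    from scipy import stats

    def sample_size_for_precision(cv, precision=0.05, alpha=0.05):
        """Smallest n whose 95% CI half-width for the mean, expressed as a
        fraction of the mean, is within `precision`, given a coefficient of
        variation `cv`."""
        n = 2
        while True:
            t = stats.t.ppf(1 - alpha / 2, df=n - 1)
            if t * cv / n**0.5 <= precision:
                return n
            n += 1

    print(sample_size_for_precision(cv=0.15))  # e.g., an endocarp trait with 15% CV
    ```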

  3. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, Ehsan, E-mail: samei@duke.edu; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) to that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits spatial resolution that varies with object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different levels for different tasks.
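
    Combining a task-based MTF and NPS into a detectability index is typically done through a model observer. The sketch below uses the non-prewhitening observer formula on a 1-D radial frequency grid with toy MTF, NPS, and task functions; the study's actual observer model, task function, and integration are likely different:

    ```python
    import numpy as np

    f = np.linspace(0.01, 1.0, 500)  # radial spatial frequency (cycles/mm)

    def dprime_npw(task_w, mtf, nps, f):
        """Non-prewhitening detectability index from task-based MTF and NPS."""
        num = np.trapz((task_w * mtf) ** 2 * 2 * np.pi * f, f) ** 2
        den = np.trapz((task_w * mtf) ** 2 * nps * 2 * np.pi * f, f)
        return np.sqrt(num / den)

    mtf = np.exp(-(f / 0.5) ** 2)        # toy Gaussian MTF
    nps = f * np.exp(-(f / 0.6) ** 2)    # toy midpass (FBP-like) NPS
    for d in (1.5, 25.0):                # small vs. large feature (mm)
        task = d * np.abs(np.sinc(f * d))  # crude stand-in for a disk spectrum
        print(d, round(dprime_npw(task, mtf, nps, f), 1))
    ```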

  4. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  5. The Correlation Fractal Dimension of Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei

    2013-05-01

    The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For four benchmark cases tested, that is, the Escherichia coli (E. Coli) metabolic network, the Homo sapiens protein interaction network (H. Sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. Cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
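
    A correlation-dimension estimate for a network follows the correlation-integral idea: C(r) is the fraction of node pairs whose shortest-path distance is at most r, and the dimension is the slope of log C(r) versus log r. A sketch of that general recipe on a toy graph; the paper's exact estimator and the four benchmark networks are not reproduced here:

    ```python
    import networkx as nx
    import numpy as np

    def correlation_dimension(G, r_values):
        """Slope of log C(r) vs log r, with C(r) the correlation integral
        over shortest-path distances."""
        lengths = dict(nx.all_pairs_shortest_path_length(G))
        n = G.number_of_nodes()
        d = np.array([v for row in lengths.values() for v in row.values() if v > 0])
        C = np.array([(d <= r).sum() / (n * (n - 1)) for r in r_values])
        slope, _ = np.polyfit(np.log(r_values), np.log(C), 1)
        return slope

    G = nx.watts_strogatz_graph(200, 4, 0.01)  # toy small-world network
    print(correlation_dimension(G, np.arange(2, 10)))
    ```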

  6. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for an automated diagnosis of glaucoma using digital fundus images. Variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features namely, Kapoor entropy, Renyi entropy, Yager entropy, and fractal dimensions are extracted from VMD components. ReliefF algorithm is used to select the discriminatory features and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid the ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images.

  7. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  8. Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Kao, Chiu Yen; Osher, Stanley; Qian, Jianliang

    2004-05-01

    We propose a simple, fast sweeping method based on the Lax-Friedrichs monotone numerical Hamiltonian to approximate viscosity solutions of arbitrary static Hamilton-Jacobi equations in any number of spatial dimensions. By using the Lax-Friedrichs numerical Hamiltonian, we can easily obtain the solution at a specific grid point in terms of its neighbors, so that a Gauss-Seidel type nonlinear iterative method can be utilized. Furthermore, by incorporating a group-wise causality principle into the Gauss-Seidel iteration by following a finite group of characteristics, we have an easy-to-implement, sweeping-type, and fast convergent numerical method. However, unlike other methods based on the Godunov numerical Hamiltonian, some computational boundary conditions are needed in the implementation. We give a simple recipe which enforces a version of discrete min-max principle. Some convergence analysis is done for the one-dimensional eikonal equation. Extensive 2-D and 3-D numerical examples illustrate the efficiency and accuracy of the new approach. To our knowledge, this is the first fast numerical method based on discretizing the Hamilton-Jacobi equation directly without assuming convexity and/or homogeneity of the Hamiltonian.
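
    In one spatial dimension the Lax-Friedrichs sweeping update for the eikonal equation |u'(x)| = f(x) takes a particularly simple form, with an artificial viscosity constant σ bounding |H'(p)|. A minimal sketch for a point source at x = 0, where a crude linear extrapolation stands in for the paper's discrete min-max boundary recipe:

    ```python
    import numpy as np

    def lf_sweep_eikonal_1d(f, h, n_sweeps=50):
        """Gauss-Seidel Lax-Friedrichs sweeping for |u'| = f with u(0) = 0."""
        n = len(f)
        u = np.full(n, 1e6)  # large initial guess
        u[0] = 0.0
        sigma = 1.0          # bound on |H'(p)| for H(p) = |p|
        for _ in range(n_sweeps):
            # alternate a forward and a backward sweep
            for i in list(range(1, n - 1)) + list(range(n - 2, 0, -1)):
                cand = (h / sigma) * (f[i] - abs(u[i + 1] - u[i - 1]) / (2 * h)) \
                       + 0.5 * (u[i + 1] + u[i - 1])
                u[i] = min(u[i], cand)
            u[-1] = min(u[-1], 2 * u[-2] - u[-3])  # simple outflow boundary
        return u

    h = 0.01
    x = np.arange(0.0, 1.0 + h, h)
    u = lf_sweep_eikonal_1d(np.ones_like(x), h)
    print(np.abs(u - x).max())  # exact solution is u(x) = x
    ```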

  9. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of complex field. Consequently, many imaging and sensing applications including microscopy and optical tweezing have turned to using digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target racking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criteria. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to reduce significantly large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  10. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high with several thousand genes (i.e., features) but with only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using a maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information-theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
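
    The order-selection step can be sketched with scikit-learn, whose PCA score is the average log-likelihood of the data under the Tipping-Bishop probabilistic PCA model. This illustrates only an AIC-style choice of the number of retained PPCs; the paper's maximum entropy and hybridized smoothed covariance estimators, and its ICOMP criterion, are not reproduced:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 200))  # toy undersized data: 40 samples, 200 features

    def ppca_aic(X, q):
        n, p = X.shape
        ll = PCA(n_components=q).fit(X).score(X) * n  # total PPCA log-likelihood
        k = p * q - q * (q - 1) / 2 + q + 1           # free parameters of PPCA
        return -2 * ll + 2 * k

    best_q = min(range(1, 15), key=lambda q: ppca_aic(X, q))
    print("AIC-selected number of PPCs:", best_q)
    ```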

  11. Identification of seedling cabbages and weeds using hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Target detection is one of the research focuses for precision chemical application. This study developed a method to identify seedling cabbages and weeds using hyperspectral imaging. In processing the image data, with ENVI software, after dimension reduction, noise reduction, de-correlation for h...

  12. Radiation dose reduction in CT with adaptive statistical iterative reconstruction (ASIR) for patients with bronchial carcinoma and intrapulmonary metastases.

    PubMed

    Schäfer, M-L; Lüdemann, L; Böning, G; Kahn, J; Fuchs, S; Hamm, B; Streitparth, F

    2016-05-01

    To compare the radiation dose and image quality of 64-row chest computed tomography (CT) in patients with bronchial carcinoma or intrapulmonary metastases using full-dose CT reconstructed with filtered back projection (FBP) at baseline and reduced dose with 40% adaptive statistical iterative reconstruction (ASIR) at follow-up. The chest CT images of patients who underwent FBP and ASIR studies were reviewed. Dose-length products (DLP), effective dose, and size-specific dose estimates (SSDEs) were obtained. Image quality was analysed quantitatively by signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) measurement. In addition, image quality was assessed by two blinded radiologists evaluating images for noise, contrast, artefacts, visibility of small structures, and diagnostic acceptability using a five-point scale. The ASIR studies showed 36% reduction in effective dose compared with the FBP studies. The qualitative and quantitative image quality was good to excellent in both protocols, without significant differences. There were also no significant differences for SNR except for the SNR of lung surrounding the tumour (FBP: 35±17, ASIR: 39±22). A protocol with 40% ASIR can provide approximately 36% dose reduction in chest CT of patients with bronchial carcinoma or intrapulmonary metastases while maintaining excellent image quality.

  13. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    Understanding ultrasonic motor performance as a function of input parameters, such as voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a contact model of a distributed spring and rigid body between the stator and rotor, a two-dimension analytical model of the rotary traveling wave ultrasonic motor is constructed. The performances of steady rotation speed and stall torque are then deduced. Using MATLAB and an iteration algorithm, we estimate the performance of rotation speed and stall torque versus the input parameters. The corresponding experiments are completed with an optoelectronic tachometer and stand weight. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.

  14. A terracing operator for physical property mapping with potential field data

    USGS Publications Warehouse

    Cordell, L.; McCafferty, A.E.

    1989-01-01

    The terracing operator works iteratively on gravity or magnetic data, using the sense of the measured field's local curvature, to produce a field comprised of uniform domains separated by abrupt domain boundaries. The result is crudely proportional to a physical-property function defined in one (profile case) or two (map case) horizontal dimensions. This result can be extended to a physical-property model if its behavior in the third (vertical) dimension is defined, either arbitrarily or on the basis of the local geologic situation. The terracing algorithm is computationally fast and appropriate to use with very large digital data sets. The terracing operator was applied separately to aeromagnetic and gravity data from a 136km x 123km area in eastern Kansas. Results provide a reasonably good physical representation of both the gravity and the aeromagnetic data. Superposition of the results from the two data sets shows many areas of agreement that can be referenced to geologic features within the buried Precambrian crystalline basement.

  15. Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination

    NASA Technical Reports Server (NTRS)

    Gray, J. L.; Schwartz, R. J.

    1984-01-01

    A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's Equation and the hole and electron continuity equations, simultaneously in two-dimensions using finite differences to discretize the equations and Newton's Method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian Elimination. Convergence of the Newton Iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobi matrix of order 6000 and a bandwidth of 243.

  16. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver for homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of directional sampling intervals in the discretization.

  17. Adaptive statistical iterative reconstruction use for radiation dose reduction in pediatric lower-extremity CT: impact on diagnostic image quality.

    PubMed

    Shah, Amisha; Rees, Mitchell; Kar, Erica; Bolton, Kimberly; Lee, Vincent; Panigrahy, Ashok

    2018-06-01

    For the past several years, increased levels of imaging radiation and cumulative radiation to children has been a significant concern. Although several measures have been taken to reduce radiation dose during computed tomography (CT) scan, the newer dose reduction software adaptive statistical iterative reconstruction (ASIR) has been an effective technique in reducing radiation dose. To our knowledge, no studies are published that assess the effect of ASIR on extremity CT scans in children. To compare radiation dose, image noise, and subjective image quality in pediatric lower extremity CT scans acquired with and without ASIR. The study group consisted of 53 patients imaged on a CT scanner equipped with ASIR software. The control group consisted of 37 patients whose CT images were acquired without ASIR. Image noise, Computed Tomography Dose Index (CTDI) and dose length product (DLP) were measured. Two pediatric radiologists rated the studies in subjective categories: image sharpness, noise, diagnostic acceptability, and artifacts. The CTDI (p value = 0.0184) and DLP (p value <0.0002) were significantly decreased with the use of ASIR compared with non-ASIR studies. However, the subjective ratings for sharpness (p < 0.0001) and diagnostic acceptability of the ASIR images (p < 0.0128) were decreased compared with standard, non-ASIR CT studies. Adaptive statistical iterative reconstruction reduces radiation dose for lower extremity CTs in children, but at the expense of diagnostic imaging quality. Further studies are warranted to determine the specific utility of ASIR for pediatric musculoskeletal CT imaging.

  18. CT image reconstruction with half precision floating-point values.

    PubMed

    Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc

    2011-07-01

    Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms find their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows implementing the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, much more efficiently. Instead of the standard 32 bit floating-point values (float), a recently standardized floating-point value with 16 bit (half) is adopted for data representation in image domain and in rawdata domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of quantization noise, which is caused by a reduction in the data precision of both rawdata and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values allow to speed up CT image reconstruction without compromising image quality.
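
    The precision cost of the half format can be previewed directly in NumPy: IEEE half precision carries an 11-bit significand, so the worst-case relative rounding error is about 2^-11 ≈ 4.9 × 10^-4. A toy check on synthetic projection-like data; the array shape and value range are illustrative, not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sino32 = rng.random((720, 512), dtype=np.float32) * 1000.0 + 1.0  # rawdata (a.u.)
    sino16 = sino32.astype(np.float16)  # half-precision representation

    rel_err = np.abs(sino16.astype(np.float32) - sino32) / sino32
    print("max relative quantization error:", rel_err.max())  # ~5e-4
    ```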

  19. Correlation Dimension Estimates of Global and Local Temperature Data.

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    1995-11-01

    The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate that has been recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias, but neither is it a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.

  20. Analysis of metal artifact reduction tools for dental hardware in CT scans of the oral cavity: kVp, iterative reconstruction, dual-energy CT, metal artifact reduction software: does it make a difference?

    PubMed

    De Crop, An; Casselman, Jan; Van Hoof, Tom; Dierens, Melissa; Vereecke, Elke; Bossu, Nicolas; Pamplona, Jaime; D'Herde, Katharina; Thierens, Hubert; Bacher, Klaus

    2015-08-01

    Metal artifacts may negatively affect radiologic assessment in the oral cavity. The aim of this study was to evaluate different metal artifact reduction techniques for metal artifacts induced by dental hardware in CT scans of the oral cavity. Clinical image quality was assessed using a Thiel-embalmed cadaver. A Catphan phantom and a polymethylmethacrylate (PMMA) phantom were used to evaluate physical-technical image quality parameters such as artifact area, artifact index (AI), and contrast detail (IQFinv). Metal cylinders were inserted in each phantom to create metal artifacts. CT images of both phantoms and the Thiel-embalmed cadaver were acquired on a multislice CT scanner using 80, 100, 120, and 140 kVp; model-based iterative reconstruction (Veo); and synthesized monochromatic keV images with and without metal artifact reduction software (MARs). Four radiologists assessed the clinical image quality, using an image criteria score (ICS). Significant influence of increasing kVp and the use of Veo was found on clinical image quality (p = 0.007 and p = 0.014, respectively). Application of MARs resulted in a smaller artifact area (p < 0.05). However, MARs-reconstructed images resulted in lower ICS. Of all investigated techniques, Veo proved the most promising, with a significant improvement of both the clinical and physical-technical image quality without adversely affecting contrast detail. MARs reconstruction in CT images of the oral cavity to reduce dental hardware metallic artifacts is not sufficient and may even adversely influence the image quality.

  1. A novel scaling law relating the geometrical dimensions of a photocathode radio frequency gun to its radio frequency properties

    NASA Astrophysics Data System (ADS)

    Lal, Shankar; Pant, K. K.; Krishnagopal, S.

    2011-12-01

    Developing a photocathode RF gun with the desired RF properties of the π-mode, such as field balance (eb) ~1, resonant frequency fπ = 2856 MHz, and waveguide-to-cavity coupling coefficient βπ ~1, requires precise tuning of the resonant frequencies of the independent full- and half-cells (ff and fh), and of the waveguide-to-full-cell coupling coefficient (βf). While contemporary electromagnetic codes and precision machining capability have made it possible to design and tune independent cells of a photocathode RF gun for desired RF properties, thereby eliminating the need for tuning, access to such computational resources and quality of machining is not very widespread. Therefore, many such structures require tuning after machining by employing conventional tuning techniques that are iterative in nature. Any procedure that improves understanding of the tuning process and consequently reduces the number of iterations and the associated risks in tuning a photocathode gun would, therefore, be useful. In this paper, we discuss a method devised by us to tune a photocathode RF gun for desired RF properties under operating conditions. We develop and employ a simple scaling law that accounts for inter-dependence between frequency of independent cells and waveguide-to-cavity coupling coefficient, and the effect of brazing clearance for joining of the two cells. The method has been employed to successfully develop multiple 1.6 cell BNL/SLAC/UCLA type S-band photocathode RF guns with the desired RF properties, without the need to tune them by a tiresome cut-and-measure process. Our analysis also provides a physical insight into how the geometrical dimensions affect the RF properties of the photo-cathode RF gun.

  2. Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim

    2018-07-01

    Ensemble Kalman (EnKF) filtering is an established framework for large-scale state estimation problems. EnKFs can also be used for state-parameter estimation, using the so-called "Joint-EnKF" approach. The idea is simply to augment the state vector with the parameters to be estimated and assign invariant dynamics for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially-varying Manning's n coefficients used to define the bottom roughness in the Shallow Water Equations (SWEs) of a coastal ocean model. Observation System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the Shallow Water Equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water elevation data. It is found that with reasonable ensemble size (O(10)), the filter's estimate converges to the reference Manning's field. To enhance performance, we have further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We have also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, dimension of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising robust approach to infer spatially varying seabed roughness parameters in the context of coastal ocean modeling.
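
    The Joint-EnKF idea is simply to stack the parameters beneath the state so that ensemble cross-covariances carry the water-elevation innovations into the Manning's n values. Below is a minimal perturbed-observation EnKF analysis step illustrating the augmentation; the study itself uses the deterministic SEIK filter with localization and a KL-reduced parameter space, none of which appears here:

    ```python
    import numpy as np

    def joint_enkf_update(states, params, obs, obs_op, obs_err_std, rng):
        """One stochastic EnKF analysis step on z = [state; parameters]."""
        Z = np.vstack([states, params])       # (n_z, n_ens) augmented ensemble
        HZ = obs_op(states)                   # (n_obs, n_ens) predicted observations
        A = Z - Z.mean(axis=1, keepdims=True)
        HA = HZ - HZ.mean(axis=1, keepdims=True)
        n_ens = Z.shape[1]
        Pzh = A @ HA.T / (n_ens - 1)
        Phh = HA @ HA.T / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
        K = Pzh @ np.linalg.inv(Phh)          # Kalman gain
        obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, HZ.shape)
        Z = Z + K @ (obs_pert - HZ)
        return Z[: states.shape[0]], Z[states.shape[0]:]

    rng = np.random.default_rng(0)
    states = rng.normal(0.0, 1.0, (10, 20))   # 10 state variables, 20 members
    params = rng.normal(0.03, 0.01, (3, 20))  # e.g., 3 Manning's n values
    obs_op = lambda s: s[:4]                  # observe first 4 state variables
    s_a, p_a = joint_enkf_update(states, params, np.zeros(4), obs_op, 0.1, rng)
    print(p_a.mean(axis=1))                   # updated parameter ensemble mean
    ```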

  3. Defining competency-based evaluation objectives in family medicine

    PubMed Central

    Lawrence, Kathrine; Allen, Tim; Brailovsky, Carlos; Crichton, Tom; Bethune, Cheri; Donoff, Michel; Laughlin, Tom; Wetmore, Stephen; Carpentier, Marie-Pierre; Visser, Shaun

    2011-01-01

    Abstract Objective To develop key features for priority topics previously identified by the College of Family Physicians of Canada that, together with skill dimensions and phases of the clinical encounter, broadly describe competence in family medicine. Design Modified nominal group methodology, which was used to develop key features for each priority topic through an iterative process. Setting The College of Family Physicians of Canada. Participants An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian family medicine context with respect to region, sex, language, community type, and experience. Methods The group used a modified Delphi process to derive a detailed operational definition of competence, using multiple iterations until consensus was achieved for the items under discussion. The group met 3 to 4 times a year from 2000 to 2007. Main findings The group analyzed 99 topics and generated 773 key features. There were 2 to 20 (average 7.8) key features per topic; 63% of the key features focused on the diagnostic phase of the clinical encounter. Conclusion This project expands previous descriptions of the process of generating key features for assessment, and removes this process from the context of written examinations. A key-features analysis of topics focuses on higher-order cognitive processes of clinical competence. The project did not define all the skill dimensions of competence to the same degree, but it clearly identified those requiring further definition. This work generates part of a discipline-specific, competency-based definition of family medicine for assessment purposes. It limits the domain for assessment purposes, which is an advantage for the teaching and assessment of learners. A validation study on the content of this work would ensure that it truly reflects competence in family medicine. PMID:21998245

  4. Simultaneous multigrid techniques for nonlinear eigenvalue problems: Solutions of the nonlinear Schrödinger-Poisson eigenvalue problem in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Costiner, Sorin; Ta'asan, Shlomo

    1995-07-01

    Algorithms for nonlinear eigenvalue problems (EP's) often require solving self-consistently a large number of EP's. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP's have often been shown to be more efficient than single-level algorithms. This paper presents MG techniques and an MG algorithm for nonlinear Schrödinger-Poisson EP's. The algorithm overcomes the above-mentioned difficulties by combining the following techniques: an MG simultaneous treatment of the eigenvectors, the nonlinearity, and the global constraints; MG stable subspace continuation techniques for the treatment of nonlinearity; and an MG projection coupled with backrotations for separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger-Poisson EP in two and three dimensions, presenting special computational difficulties due to the nonlinearity and to the equal and closely clustered eigenvalues, are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.

  5. Small-angle scattering from the Cantor surface fractal on the plane and the Koch snowflake

    NASA Astrophysics Data System (ADS)

    Cherny, Alexander Yu.; Anitas, Eugen M.; Osipov, Vladimir A.; Kuklin, Alexander I.

    The small-angle scattering (SAS) from the Cantor surface fractal on the plane and the Koch snowflake is considered. We develop the construction algorithm for the Koch snowflake, which makes possible the recurrence relation for the scattering amplitude. The surface fractals can be decomposed into a sum of surface mass fractals for arbitrary fractal iteration, which enables various approximations for the scattering intensity. It is shown that for the Cantor fractal, one can neglect, with good accuracy, the correlations between the mass fractal amplitudes, while for the Koch snowflake, these correlations are important. It is shown that, nevertheless, the correlations can be built into the mass fractal amplitudes, which explains the decay of the scattering intensity $I(q)\sim q^{D_{\mathrm{s}}-4}$ with $1 < D_{\mathrm{s}} < 2$ being the fractal dimension of the perimeter. The curve $I(q)q^{4-D_{\mathrm{s}}}$ is found to be log-periodic in the fractal region with the period equal to the scaling factor of the fractal. The log-periodicity arises from the self-similarity of sizes of basic structural units rather than from correlations between their distances. A recurrence relation is obtained for the radius of gyration of the Koch snowflake, which is solved in the limit of infinite iterations. The present analysis allows us to obtain additional information from SAS data, such as the edges of the fractal regions, the fractal iteration number and the scaling factor.

  6. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing data. Conventional classification methods may not be useful without dimension reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction separates classes more distinctly and yields better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
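
    Spectral reduction with wavelets is a multilevel decomposition of each pixel's spectrum, keeping the coarse approximation coefficients as the reduced feature vector. A sketch with PyWavelets using an illustrative band count and decomposition level; note that Haar gives an exact dyadic reduction, while db4's longer filters leave slightly more coefficients:

    ```python
    import numpy as np
    import pywt

    spectrum = np.random.rand(128)  # one pixel with 128 spectral bands (toy)

    for name in ("haar", "db4"):
        coeffs = pywt.wavedec(spectrum, name, level=3)
        approx = coeffs[0]          # low-frequency summary used as features
        print(name, "reduced", spectrum.size, "bands to", approx.size, "features")
    ```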

  7. Evaluation of a dimension-reduction-based statistical technique for Temperature, Water Vapour and Ozone retrievals from IASI radiances

    NASA Astrophysics Data System (ADS)

    Amato, Umberto; Antoniadis, Anestis; De Feis, Italia; Masiello, Guido; Matricardi, Marco; Serio, Carmine

    2009-03-01

    Remote sensing of the atmosphere is changing rapidly thanks to the development of high spectral resolution infrared space-borne sensors. The aim is to provide more and more accurate information on the lower atmosphere, as requested by the World Meteorological Organization (WMO), to improve the reliability and time span of weather forecasts and Earth monitoring. In this paper we show the results we have obtained on a set of Infrared Atmospheric Sounding Interferometer (IASI) observations using a new statistical strategy based on dimension reduction. Retrievals have been compared to time-space colocated ECMWF analysis for temperature, water vapor and ozone.

  8. Fractal structures and fractal functions as disease indicators

    USGS Publications Warehouse

    Escos, J.M; Alados, C.L.; Emlen, J.M.

    1995-01-01

    Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability on two species, green peppers (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance in allometric relationship between control plants, and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in complexity of branch structure in green pepper with a viral infection. Box-counting fractal dimension of branch architecture declined under stress infection. We also tested the reduction in complexity of behavioral patterns under stress situations in Spanish ibex (Capra pyrenaica). Fractal dimension of head-lift frequency distribution measures predator detection efficiency. This dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proves to be the most powerful tool for characterizing fractal behavior, revealing a reduction in complexity of time distribution activity under parasitic infection.

  9. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and supported on MS Windows, Linux and Mac OS.

  10. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliev, Alikram N.; Cebeci, Hakan; Dereli, Tekin

    We present an exact solution describing a stationary and axisymmetric object with electromagnetic and dilaton fields. The solution generalizes the usual Kerr-Taub-NUT (Newman-Unti-Tamburino) spacetime in general relativity and is obtained by boosting this spacetime in the fifth dimension and performing a Kaluza-Klein reduction to four dimensions. We also discuss the physical parameters of this solution and calculate its gyromagnetic ratio.

  12. Old Tails and New Trails in High Dimensions

    ERIC Educational Resources Information Center

    Halevy, Avner

    2013-01-01

    We discuss the motivation for dimension reduction in the context of the modern data revolution and introduce a key result in this field, the Johnson-Lindenstrauss flattening lemma. Then we leap into high-dimensional space for a glimpse of the phenomenon called concentration of measure, and use it to sketch a proof of the lemma. We end by tying…
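
    The flattening lemma is easy to see in action: a scaled Gaussian random matrix maps n points from dimension d down to k = O(log n / ε²) dimensions while approximately preserving pairwise distances. A toy demonstration with illustrative sizes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 50, 10_000, 1_000
    X = rng.normal(size=(n, d))               # n points in high dimension
    R = rng.normal(size=(d, k)) / np.sqrt(k)  # scaled random projection matrix
    Y = X @ R

    orig = np.linalg.norm(X[0] - X[1])
    proj = np.linalg.norm(Y[0] - Y[1])
    print("distortion ratio:", proj / orig)   # close to 1 with high probability
    ```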

  13. Adaptive Statistical Iterative Reconstruction-V: Impact on Image Quality in Ultralow-Dose Coronary Computed Tomography Angiography.

    PubMed

    Benz, Dominik C; Gräni, Christoph; Mikulicic, Fran; Vontobel, Jan; Fuchs, Tobias A; Possner, Mathias; Clerc, Olivier F; Stehli, Julia; Gaemperli, Oliver; Pazhenkottil, Aju P; Buechel, Ronny R; Kaufmann, Philipp A

    The clinical utility of a latest generation iterative reconstruction algorithm (adaptive statistical iterative reconstruction [ASiR-V]) has yet to be elucidated for coronary computed tomography angiography (CCTA). This study evaluates the impact of ASiR-V on signal, noise and image quality in CCTA. Sixty-five patients underwent clinically indicated CCTA on a 256-slice CT scanner using an ultralow-dose protocol. Data sets from each patient were reconstructed at 6 different levels of ASiR-V. Signal intensity was measured by placing a region of interest in the aortic root, LMA, and RCA. Similarly, noise was measured in the aortic root. Image quality was visually assessed by 2 readers. Median radiation dose was 0.49 mSv. Image noise decreased with increasing levels of ASiR-V resulting in a significant increase in signal-to-noise ratio in the RCA and LMA (P < 0.001). Correspondingly, image quality significantly increased with higher levels of ASiR-V (P < 0.001). ASiR-V yields substantial noise reduction and improved image quality enabling introduction of ultralow-dose CCTA.

  14. Time Dependent Predictive Modeling of DIII-D ITER Baseline Scenario using Predictive TRANSP

    NASA Astrophysics Data System (ADS)

    Grierson, B. A.; Andre, R. G.; Budny, R. V.; Solomon, W. M.; Yuan, X.; Candy, J.; Pinsker, R. I.; Staebler, G. M.; Holland, C.; Rafiq, T.

    2015-11-01

    ITER baseline scenario discharges on DIII-D are modeled with TGLF and MMM, transitioning from combined ECH (3.3 MW) + NBI (2.8 MW) heating to NBI-only (3.0 MW) heating while maintaining βN = 2.0 on DIII-D, predicting temperature, density, and rotation for comparison to experimental measurements. These models capture the reduction of confinement associated with direct electron heating (H98y2 = 0.89 vs. 1.0), consistent with stiff electron transport. Reasonable agreement between experimental and modeled temperature profiles is achieved for both heating methods, whereas density and momentum predictions differ significantly. Transport fluxes from TGLF indicate that on DIII-D the electron energy flux has reached a transition from low-k to high-k turbulence with more stiff high-k transport that inhibits an increase in core electron stored energy with additional electron heating. Projections to ITER also indicate high electron stiffness. Supported by US DOE DE-AC02-09CH11466, DE-FC02-04ER54698, DE-FG02-07ER54917, DE-FG02-92-ER54141.

  15. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability, which have not been possible before due to a factor of over 10 shortage in the time-to-solution for less than 5 days of wall-clock time for one physics case. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous applications interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are also utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership class computer.

  16. Low-light-level image super-resolution reconstruction based on iterative projection photon localization algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Changsheng; Zhao, Peng; Li, Ye

    2018-01-01

    The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
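
    The final localization step, recovering a photon's position from a small blob on the detector with an intensity-weighted centroid, can be sketched as follows. This is a toy version of the centroid computation only, not the full iterative projection and slicing algorithm; the window size and synthetic blob are illustrative:

    ```python
    import numpy as np

    def photon_centroid(frame, peak, half=2):
        """Intensity-weighted centroid in a small window around a local maximum."""
        r, c = peak
        win = frame[r - half : r + half + 1, c - half : c + half + 1].astype(float)
        ys, xs = np.mgrid[r - half : r + half + 1, c - half : c + half + 1]
        return (ys * win).sum() / win.sum(), (xs * win).sum() / win.sum()

    frame = np.zeros((16, 16))
    frame[7:10, 7:10] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]  # synthetic photon blob
    print(photon_centroid(frame, (8, 8)))  # ≈ (8.0, 8.0), sub-pixel position
    ```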

  17. Contributing Factors to Driver's Over-trust in a Driving Support System for Workload Reduction

    NASA Astrophysics Data System (ADS)

    Itoh, Makoto

    Avoiding over-trust in machines is a vital issue in order to establish intelligent driver support systems. It is necessary to distinguish systems for workload reduction from systems for accident prevention/mitigation. This study focuses on over-trust in an Adaptive Cruise Control (ACC) system as a typical driving support system for workload reduction. By conducting an experiment, we obtained a case in which a driver trusted the ACC system too much. Concretely speaking, the driver just watched the ACC system crashing into a stopped car even though the ACC system was designed to ignore such stopped cars. This paper investigates possible contributing factors to the driver's over-trust in the ACC system. The results suggest that emerging trust in the dimension of performance may cause over-trust in the dimension of method or purpose.

  18. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    PubMed

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.

  19. A quantitative comparison of noise reduction across five commercial (hybrid and model-based) iterative reconstruction techniques: an anthropomorphic phantom study.

    PubMed

    Patino, Manuel; Fuentes, Jorge M; Hayano, Koichi; Kambadakone, Avinash R; Uyeda, Jennifer W; Sahani, Dushyant V

    2015-02-01

    OBJECTIVE. The objective of our study was to compare the performance of three hybrid iterative reconstruction techniques (IRTs) (ASiR, iDose4, SAFIRE) and their respective strengths for image noise reduction on low-dose CT examinations using filtered back projection (FBP) as the standard reference. Also, we compared the performance of these three hybrid IRTs with two model-based IRTs (Veo and IMR) for image noise reduction on low-dose examinations. MATERIALS AND METHODS. An anthropomorphic abdomen phantom was scanned at 100 and 120 kVp and different tube current-exposure time products (25-100 mAs) on three CT systems (for ASiR and Veo, Discovery CT750 HD; for iDose4 and IMR, Brilliance iCT; and for SAFIRE, Somatom Definition Flash). Images were reconstructed using FBP and using IRTs at various strengths. Nine noise measurements (mean ROI size, 423 mm²) on extracolonic fat for the different strengths of IRTs were recorded and compared with FBP using ANOVA. Radiation dose, which was measured as the volume CT dose index and dose-length product, was also compared. RESULTS. There were no significant differences in radiation dose and image noise among the scanners when FBP was used (p > 0.05). Gradual image noise reduction was observed with each increasing increment of hybrid IRT strength, with a maximum noise suppression of approximately 50% (48.2-53.9%). Similar noise reduction was achieved on the scanners by applying specific hybrid IRT strengths. Maximum noise reduction was higher on model-based IRTs (68.3-81.1%) than hybrid IRTs (48.2-53.9%) (p < 0.05). CONCLUSION. When constant scanning parameters are used, radiation dose and image noise on FBP are similar for CT scanners made by different manufacturers. Significant image noise reduction is achieved on low-dose CT examinations rendered with IRTs. The image noise on various scanners can be matched by applying specific hybrid IRT strengths. Model-based IRTs attain substantially higher noise reduction than hybrid IRTs irrespective of the radiation dose.
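
    The noise metric behind such comparisons is simply the standard deviation of CT numbers within an ROI; a minimal sketch of the percent-reduction calculation (Python, helper names hypothetical):

      import numpy as np

      def roi_noise(image, mask):
          """Image noise as the standard deviation of HU values inside an ROI."""
          return image[mask].std()

      def noise_reduction_pct(fbp_image, ir_image, mask):
          """Percent noise reduction of an IR reconstruction relative to FBP,
          the quantity tabulated per IRT strength in phantom studies."""
          n_fbp, n_ir = roi_noise(fbp_image, mask), roi_noise(ir_image, mask)
          return 100.0 * (n_fbp - n_ir) / n_fbp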

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, S; Shulkin, B

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a reduction of 90% CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. No change was observed in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%–86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co-localization of hybrid CT anatomy and PET radioisotope uptake.
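
    For reference, SUVbw normalizes the tissue activity concentration by the injected dose per gram of body weight; a minimal sketch assuming unit tissue density, with an optional F-18 decay correction (parameter names illustrative):

      def suv_bw(activity_conc_bq_ml, injected_dose_bq, weight_kg,
                 decay_minutes=0.0, half_life_min=109.77):
          """Body-weight standardized uptake value. Decay-corrects the injected
          dose to scan time if decay_minutes is given (F-18 default half-life)."""
          dose_at_scan = injected_dose_bq * 0.5 ** (decay_minutes / half_life_min)
          return activity_conc_bq_ml * (weight_kg * 1000.0) / dose_at_scan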

  1. Integrating dimension reduction and out-of-sample extension in automated classification of ex vivo human patellar cartilage on phase contrast X-ray computed tomography.

    PubMed

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel

    2015-01-01

    Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subjected to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.
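
    A minimal sketch of the reduce-then-classify protocol, with PCA standing in for the various reduction techniques compared in the study and scikit-learn assumed available (function and parameter names are illustrative):

      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVR
      from sklearn.metrics import roc_auc_score

      def auc_after_reduction(X, y, n_components=2):
          """Reduce the feature matrix X to n_components dimensions, then score
          a support vector regressor's cross-validated outputs against binary
          labels y with the area under the ROC curve."""
          Z = PCA(n_components=n_components).fit_transform(X)
          scores = cross_val_predict(SVR(kernel="rbf"), Z, y, cv=5)
          return roc_auc_score(y, scores)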

  2. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared the KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have an advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.
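
    A sketch of the decorrelation step, assuming a sinogram arranged as views by detector bins; the per-component PWLS smoothing, inverse KL transform, and FBP that follow are omitted:

      import numpy as np

      def kl_transform_views(sinogram, n_neighbors=3):
          """Karhunen-Loeve decorrelation among neighboring views: stack each
          view with its neighbors, estimate their covariance, and project onto
          its eigenvectors. Returns the KL components and the basis."""
          n_views, _ = sinogram.shape
          offsets = np.arange(-(n_neighbors // 2), n_neighbors // 2 + 1)
          idx = np.clip(np.arange(n_views)[:, None] + offsets, 0, n_views - 1)
          groups = sinogram[idx]                      # (views, neighbors, bins)
          X = groups.transpose(1, 0, 2).reshape(n_neighbors, -1)
          _, eigvecs = np.linalg.eigh(np.cov(X))      # basis of view covariance
          components = np.einsum('nk,vnb->vkb', eigvecs, groups)
          return components, eigvecs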

  3. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. The non-positive signals must be converted to positive values so that they can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signals to positive ones mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
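
    A minimal sketch of the final conversion step only (the PWLS restoration that precedes it is omitted, and the window size is illustrative):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def positivize(raw, size=5):
          """Replace non-positive raw measurements with their local mean so the
          data can be log-transformed, preserving the local mean as described."""
          local_mean = uniform_filter(raw.astype(float), size=size)
          out = raw.astype(float)
          bad = out <= 0
          out[bad] = np.maximum(local_mean[bad], np.finfo(float).tiny)
          return out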

  4. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction.

    PubMed

    Kaasalainen, Touko; Palmu, Kirsi; Lampinen, Anniina; Reijonen, Vappu; Leikola, Junnu; Kivisaari, Riku; Kortesniemi, Mika

    2015-09-01

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality.

  5. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is developed to implement the KWA, in order to compensate for the insufficient hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.

  6. Implementation of spectral clustering with partitioning around medoids (PAM) algorithm on microarray data of carcinoma

    NASA Astrophysics Data System (ADS)

    Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin

    2017-03-01

    Microarray technology has become one of the imperative tools in life science for observing gene expression levels, among them the gene expression of people with carcinoma. Carcinoma is a cancer that forms in the epithelial tissue. These data can be analyzed to identify hereditary gene expression and to build classifications that can be used to improve the diagnosis of carcinoma. Microarray data usually come in such large dimension that most methods require long computing times for the grouping. Therefore, this study uses the spectral clustering method, which can work with any object and reduces the dimension. Spectral clustering is a method based on the spectral decomposition of a matrix that represents the data in the form of a graph. After the data dimensions are reduced, the data are partitioned. A well-known partitioning method is Partitioning Around Medoids (PAM), which minimizes its objective function by iteratively exchanging non-medoid points with medoid points until convergence. The objective of this research is to implement the spectral clustering method and the PAM partitioning algorithm to obtain groups of 7457 carcinoma genes based on their similarity values. The result of this study is two groups of genes with carcinoma.
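
    A compact sketch of the two-stage pipeline, a spectral embedding followed by a basic PAM swap loop, with illustrative parameter choices rather than those of the study:

      import numpy as np

      def spectral_embedding(X, k, sigma=1.0):
          """Embed data via the k smallest eigenvectors of the normalized graph
          Laplacian built from an RBF affinity (standard spectral clustering)."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.exp(-d2 / (2 * sigma ** 2))
          np.fill_diagonal(W, 0.0)
          d = W.sum(1)
          L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
          _, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
          return vecs[:, :k]

      def pam(Z, k, seed=0):
          """Basic Partitioning Around Medoids: swap a medoid for a non-medoid
          whenever the total dissimilarity decreases, until convergence."""
          rng = np.random.default_rng(seed)
          D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
          medoids = rng.choice(len(Z), k, replace=False)
          cost = D[:, medoids].min(1).sum()
          improved = True
          while improved:
              improved = False
              for m in range(k):
                  for cand in range(len(Z)):
                      if cand in medoids:
                          continue
                      trial = medoids.copy(); trial[m] = cand
                      c = D[:, trial].min(1).sum()
                      if c < cost:
                          medoids, cost, improved = trial, c, True
          return medoids, D[:, medoids].argmin(1)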

  7. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
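
    The algebraic core, approximating a matrix by one Kronecker product via an SVD of its block rearrangement (the Van Loan-Pitsianis construction), can be sketched as follows; applying it blockwise to a DG Jacobian, and any sum-of-Kronecker refinement, are beyond this sketch:

      import numpy as np

      def nearest_kronecker(A, m1, n1, m2, n2):
          """Minimize ||A - B kron C||_F by rearranging A so the problem becomes
          a rank-1 approximation, then taking the leading SVD triple."""
          assert A.shape == (m1 * m2, n1 * n2)
          R = np.empty((m1 * n1, m2 * n2))
          for i in range(m1):
              for j in range(n1):
                  block = A[i*m2:(i+1)*m2, j*n2:(j+1)*n2]
                  R[i + j * m1] = block.reshape(-1, order='F')  # vec of each block
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          B = np.sqrt(s[0]) * U[:, 0].reshape((m1, n1), order='F')
          C = np.sqrt(s[0]) * Vt[0].reshape((m2, n2), order='F')
          return B, C

      # Sanity check: an exact Kronecker product is recovered.
      rng = np.random.default_rng(1)
      B0, C0 = rng.normal(size=(3, 3)), rng.normal(size=(4, 4))
      B, C = nearest_kronecker(np.kron(B0, C0), 3, 3, 4, 4)
      assert np.allclose(np.kron(B, C), np.kron(B0, C0))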

  8. Assessment of domestic cat personality, as perceived by 416 owners, suggests six dimensions.

    PubMed

    Bennett, Pauleen C; Rutter, Nicholas J; Woodhead, Jessica K; Howell, Tiffani J

    2017-08-01

    Understanding individual behavioral differences in domestic cats could lead to improved selection when potential cat owners choose a pet with whom to share their lives, along with consequent improvements in cat welfare. Yet very few attempts have been made to elicit cat personality dimensions using the trait-based exploratory approaches applied previously, with some success, to humans and dogs. In this study, a list of over 200 adjectives used to describe cat personality was assembled. This list was refined by two focus groups. A sample of 416 adult cat owners then rated a cat they knew well on each of 118 retained words. An iterative analytical approach was used to identify 29 words which formed six personality dimensions: Playfulness, Nervousness, Amiability, Dominance, Demandingness, and Gullibility. Cronbach's alpha scores for these dimensions ranged from 0.63 to 0.8 and, together, they explained 56.08% of the total variance. Very few significant correlations were found between participant scores on the personality dimensions and descriptive variables such as owner age, cat age and owner cat-owning experience, and these were all weak to barely moderate in strength (r≤0.30). There was also only one significant group difference based on cat sex. Importantly, however, several cat personality scores were moderately (r=0.3-0.49) or strongly (r≥0.5) correlated with simple measures of satisfaction with the cat, attachment, bond quality, and the extent to which the cat was perceived to be troublesome. The results suggest that, with further validation, this scale could be used to provide a simple, tick-box assessment of an owner's perceptions regarding a cat's personality. This may be of value in both applied and research settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Dimensions of integration, continuity and longitudinality in clinical clerkships.

    PubMed

    Ellaway, Rachel H; Graves, Lisa; Cummings, Beth-Ann

    2016-09-01

    Over the past few decades, longitudinal integrated clerkships (LICs) have been proposed to address many perceived shortcomings of traditional block clerkships. This growing interest in LICs has raised broader questions regarding the role of integration, continuity and longitudinality in medical education. A study with complementary theoretical and empirical dimensions was conducted to derive a more precise way of defining these three underlying concepts within the design of medical education curricula. The theoretical dimension involved a thematic review of the literature on integration, continuity and longitudinality in medical education. The empirical dimension surveyed all 17 Canadian medical schools on how they have operationalised integration, continuity and longitudinality in their undergraduate programmes. The two dimensions were iteratively synthesised to explore the meaning and expression of integration, continuity and longitudinality in medical education curriculum design. Integration, continuity and longitudinality were expressed in many ways and forms, including: integration of clinical disciplines, combined horizontal integration and vertical integration, and programme-level integration. Types of continuity included: continuity of patients, continuity of teaching, continuity of location and peer continuity. Longitudinality focused on connected or repeating episodes of training or on connecting activities, such as encounter logging across educational episodes. Twelve of the 17 schools were running an LIC of some kind, although only one school had a mandatory LIC experience. An ordinal scale of uses of integration, continuity and longitudinality during clerkships was developed, and new definitions of these concepts in the clerkship context were generated. Different clerkship designs embodied different forms and levels of integration, continuity and longitudinality. A dichotomous view of LICs and rotation-based clerkships was found not to represent current practices in Canada, which instead tended to fall along a continuum of integration, continuity and longitudinality. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  10. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
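
    For orientation, a baseline FISTA for l1-regularized least squares is sketched below; the paper's variants replace the plain gradient step with an OS-SART subproblem and weighted proximal solves, which are not shown here:

      import numpy as np

      def fista_l1(A, b, lam, n_iter=200):
          """Standard FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
          gradient step, soft-threshold proximal step, then momentum."""
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
          for _ in range(n_iter):
              z = y - A.T @ (A @ y - b) / L
              x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x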

  11. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  12. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  13. Inflation from extra dimensions

    NASA Astrophysics Data System (ADS)

    Levin, Janna J.

    1995-02-01

    A gravity-driven inflation is shown to arise from a simple higher-dimensional universe. In vacuum, the shear of n > 1 contracting dimensions is able to inflate the remaining three spatial dimensions. Said another way, the expansion of the 3-volume is accelerated by the contraction of the n-volume. Upon dimensional reduction, the theory is equivalent to a four-dimensional cosmology with a dynamical Planck mass. A connection can therefore be made to recent examples of inflation powered by a dilaton kinetic energy. Unfortunately, the graceful exit problem encountered in dilaton cosmologies will haunt this cosmology as well.
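
    A hedged sketch of the standard Kaluza-Klein scaling behind the "dynamical Planck mass" statement (textbook dimensional reduction, not taken from the paper): for a (4+n)-dimensional vacuum metric

      ds^2 = -dt^2 + a^2(t)\,d\mathbf{x}^2 + b^2(t)\,d\Sigma_n^2 ,

    reduction on the internal n-volume gives four-dimensional gravity with an effective, time-dependent Planck mass

      M_{\rm Pl}^2(t) \propto M_*^{\,n+2}\, b^n(t) ,

    so a contracting internal scale factor b(t) makes the Planck mass dynamical, which is the dilaton-like kinetic energy alluded to above.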

  14. Application of ECH to the study of transport in ITER baseline scenario-like discharges in DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsker, R. I.; Austin, M. E.; Ernst, D. R.

    Recent DIII-D experiments in the ITER Baseline Scenario (IBS) have shown strong increases in fluctuations and correlated reduction of confinement associated with entering the electron-heating-dominated regime with strong electron cyclotron heating (ECH). The addition of 3.2 MW of 110 GHz EC power deposited at ρ~0.42 to IBS discharges with ~3 MW of neutral beam injection causes large increases in low-k and medium-k turbulent density fluctuations observed with Doppler backscatter (DBS), beam emission spectroscopy (BES) and phase-contrast imaging (PCI) diagnostics, correlated with decreases in the energy, particle, and momentum confinement times. Power balance calculations show the electron heat diffusivity χ_e increases significantly in the mid-radius region around ρ ~ 0.4.

  15. Application of ECH to the study of transport in ITER baseline scenario-like discharges in DIII-D

    DOE PAGES

    Pinsker, R. I.; Austin, M. E.; Ernst, D. R.; ...

    2015-03-12

    Recent DIII-D experiments in the ITER Baseline Scenario (IBS) have shown strong increases in fluctuations and correlated reduction of confinement associated with entering the electron-heating-dominated regime with strong electron cyclotron heating (ECH). The addition of 3.2 MW of 110 GHz EC power deposited at ρ~0.42 to IBS discharges with ~3 MW of neutral beam injection causes large increases in low-k and medium-k turbulent density fluctuations observed with Doppler backscatter (DBS), beam emission spectroscopy (BES) and phase-contrast imaging (PCI) diagnostics, correlated with decreases in the energy, particle, and momentum confinement times. Power balance calculations show the electron heat diffusivity χ_e increases significantly in the mid-radius region around ρ ~ 0.4.

  16. Development of Elderly Quality of Life Index – Eqoli: Item Reduction and Distribution into Dimensions

    PubMed Central

    Paschoal, Sérgio Márcio Pacheco; Filho, Wilson Jacob; Litvoc, Júlio

    2008-01-01

    OBJECTIVE To describe item reduction and its distribution into dimensions in the construction process of a quality of life evaluation instrument for the elderly. METHODS The sampling method was chosen by convenience through quotas, with selection of elderly subjects from four programs to achieve heterogeneity in the “health status”, “functional capacity”, “gender”, and “age” variables. The Clinical Impact Method was used, consisting of the spontaneous and elicited selection by the respondents of relevant items to the construct Quality of Life in Old Age from a previously elaborated item pool. The respondents rated each item’s importance using a 5-point Likert scale. The product of the proportion of elderly selecting the item as relevant (frequency) and the mean importance score they attributed to it (importance) represented the overall impact of that item in their quality of life (impact). The items were ordered according to their impact scores and the top 46 scoring items were grouped in dimensions by three experts. A review of the negative items was performed. RESULTS One hundred and ninety-three people (122 women and 71 men) were interviewed. Experts distributed the 46 items into eight dimensions. Closely related items were grouped, and dimensions not reaching the minimum expected number of items received additional items, resulting in eight dimensions and 43 items. DISCUSSION The sample was heterogeneous and similar to what was expected. The dimensions and items demonstrated the multidimensionality of the construct. The Clinical Impact Method was appropriate for constructing the instrument, which was named the Elderly Quality of Life Index (EQoLI). Its accuracy will be examined in future work. PMID:18438571

  17. A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints

    NASA Technical Reports Server (NTRS)

    Arneson, Heather; Bloem, Michael

    2009-01-01

    A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as-needed replanning method and a similar method that uses fixed-frequency replanning shows a typical cost reduction of 1% to 2%, and up to a 20% cost reduction in some cases.
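
    A toy version of such a linear program with illustrative weights, caps, and per-flight delay requirements (the paper's probabilistic capacity model and replanning logic are not reproduced):

      import numpy as np
      from scipy.optimize import linprog

      # Choose ground delay g_i and airborne holding a_i per flight so that the
      # total absorbs a required delay d_i, with airborne time weighted as more
      # costly, and ground delay capped to force some airborne holding.
      d = np.array([10.0, 25.0, 5.0])     # required delay per flight (min)
      w_ground, w_air = 1.0, 3.0          # airborne delay costs more
      g_cap = 15.0                        # max ground delay per flight (min)

      n = len(d)
      c = np.r_[w_ground * np.ones(n), w_air * np.ones(n)]
      A_ub = np.hstack([-np.eye(n), -np.eye(n)])   # -g_i - a_i <= -d_i
      b_ub = -d
      bounds = [(0, g_cap)] * n + [(0, None)] * n
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      g, a = res.x[:n], res.x[n:]
      print("ground delays:", g, "airborne holding:", a)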

  18. A path to stable low-torque plasma operation in ITER with test blanket modules

    NASA Astrophysics Data System (ADS)

    Lanctot, M. J.; Snipes, J. A.; Reimerdes, H.; Paz-Soldan, C.; Logan, N.; Hanson, J. M.; Buttery, R. J.; deGrassie, J. S.; Garofalo, A. M.; Gray, T. K.; Grierson, B. A.; King, J. D.; Kramer, G. J.; La Haye, R. J.; Pace, D. C.; Park, J.-K.; Salmi, A.; Shiraki, D.; Strait, E. J.; Solomon, W. M.; Tala, T.; Van Zeeland, M. A.

    2017-03-01

    New experiments in the low-torque ITER Q = 10 scenario on DIII-D demonstrate that n = 1 magnetic fields from a single row of ex-vessel control coils enable operation at ITER performance metrics in the presence of applied non-axisymmetric magnetic fields from a test blanket module (TBM) mock-up coil. With n = 1 compensation, operation below the ITER-equivalent injected torque is successful at three times the ITER-equivalent toroidal magnetic field ripple for a pair of TBMs in one equatorial port, whereas the uncompensated TBM field leads to rotation collapse, loss of H-mode and plasma current disruption. In companion experiments at high plasma beta, where the n = 1 plasma response is enhanced, uncorrected TBM fields degrade energy confinement and the plasma angular momentum while increasing fast ion losses; however, disruptions are not routinely encountered owing to increased levels of injected neutral beam torque. In this regime, n = 1 field compensation leads to recovery of a dominant fraction of the TBM-induced plasma pressure and rotation degradation, and an 80% reduction in the heat load to the first wall. These results show that the n = 1 plasma response plays a dominant role in determining plasma stability, and that n = 1 field compensation alone not only recovers most of the impact on plasma performance of the TBM, but also protects the first wall from potentially damaging heat flux. Despite these benefits, plasma rotation braking from the TBM fields cannot be fully recovered using standard error field control. Given the uncertainty in extrapolation of these results to the ITER configuration, it is prudent to design the TBMs with as low a ferromagnetic mass as possible without jeopardizing the TBM mission.

  19. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction — a phantom study

    PubMed Central

    Dodge, Cristina T.; Tamm, Eric P.; Cody, Dianna D.; Liu, Xinming; Jensen, Corey T.; Wei, Wei; Kundra, Vikas

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model‐based iterative reconstruction (MBIR), over a range of typical to low‐dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat‐equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back‐projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low‐contrast detectability were evaluated from noise and contrast‐to‐noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five‐fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high‐contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR improved with increasing dose and pitch. Unlike FBP, MBIR and ASiR may have the potential for patient imaging at around 1 mGy CTDIvol. The improved low‐contrast detectability observed with MBIR, especially at low‐dose levels, indicates the potential for considerable dose reduction. PACS number(s): 87.57.Q‐, 87.57.nf, 87.57.C‐, 87.57.cj, 87.57.cf, 87.57.cm, 87.57.uq PMID:27074454

  20. Computational Aerodynamics with Icing Effects

    DTIC Science & Technology

    1993-05-01

  1. An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Walters, R. W.

    1985-01-01

    A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.
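
    A scalar stand-in for the line-solve structure (the Navier-Stokes blocks couple several unknowns per point but follow the same pattern): line Gauss-Seidel for the 2D Laplace equation, with one tridiagonal (Thomas) solve per grid row per sweep, each row using the newest values from the row below.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-, main-, super-diagonals a, b, c."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i-1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i-1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i+1]
          return x

      def line_gauss_seidel(u, n_sweeps=200):
          """Line Gauss-Seidel for Laplace's equation on a uniform grid with
          fixed boundary values stored in u."""
          ny, nx = u.shape
          for _ in range(n_sweeps):
              for j in range(1, ny - 1):
                  n = nx - 2
                  a = -np.ones(n); b = 4.0 * np.ones(n); c = -np.ones(n)
                  a[0] = c[-1] = 0.0
                  rhs = u[j-1, 1:-1] + u[j+1, 1:-1]   # newest row j-1 (Gauss-Seidel)
                  rhs[0] += u[j, 0]; rhs[-1] += u[j, -1]
                  u[j, 1:-1] = thomas(a, b, c, rhs)
          return u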

  2. Finite elements: Theory and application

    NASA Technical Reports Server (NTRS)

    Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)

    1988-01-01

    Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.

  3. Tradespace and Affordability - Phase 1

    DTIC Science & Technology

    2013-07-09

    Assessment options: cost-effectiveness, risk reduction leverage/ROI, rework avoidance; tool, data, scenario availability. Contract Number: H98230-08-D-0171. ... Prepare FED assessment plans and earned value milestones; try to relate earned value to risk-exposure avoided rather than budgeted cost. ... Begin ... evaluate and iterate plans and enablers. ... Assess readiness for Commitment Review; shortfalls identified as risks and covered by risk mitigation.

  4. Does Iterative Reconstruction Lower CT Radiation Dose: Evaluation of 15,000 Examinations

    PubMed Central

    Noël, Peter B.; Renger, Bernhard; Fiebich, Martin; Münzel, Daniela; Fingerle, Alexander A.; Rummeny, Ernst J.; Dobritz, Martin

    2013-01-01

    Purpose Evaluation of 15,000 computed tomography (CT) examinations to investigate whether iterative reconstruction (IR) sustainably reduces radiation exposure. Method and Materials Information from 15,000 CT examinations was collected, including all aspects of the exams such as scan parameters, patient information, and reconstruction instructions. The examinations were acquired between January 2010 and December 2012; after 15 months, a first-generation IR algorithm was installed. To collect the necessary information from PACS, RIS, MPPS and structured reports, a Dose Monitoring System was developed. To harvest all possible information, an optical character recognition system was integrated, for example to collect information from the screenshot of the CT dose report. The tool transfers all data to a database for further processing, such as the calculation of effective dose and organ doses. To evaluate whether IR provides a sustainable dose reduction, the effective dose values were statistically analyzed with respect to protocol type, diagnostic indication, and patient population. Results IR has the potential to reduce radiation dose significantly. Before the clinical introduction of IR, the average effective dose was 10.1 ± 7.8 mSv; with IR it was 8.9 ± 7.1 mSv (p* = 0.01). Especially in CTA, where kV-reduction protocols can be used, the dose reduction effect is significant (p* = 0.01), for example in aortic CTAs (before IR: average 14.2 ± 7.8 mSv, median 11.4 mSv; with IR: average 9.9 ± 7.4 mSv, median 7.4 mSv) and pulmonary CTAs (before IR: average 9.7 ± 6.2 mSv, median 7.7 mSv; with IR: average 6.4 ± 4.7 mSv, median 4.8 mSv). On the contrary, for unenhanced low-dose cranial scans (for example, of the sinuses) the reduction is not significant (before IR: average 6.6 ± 5.8 mSv, median 3.9 mSv; with IR: average 6.0 ± 3.1 mSv, median 3.2 mSv). Conclusion The dose aspect remains a priority in CT research. Iterative reconstruction algorithms sustainably and significantly reduce radiation dose in clinical routine. Our results illustrate that not only in studies with a limited number of patients but also in clinical routine, IR provides long-term dose savings. PMID:24303035

  5. Does iterative reconstruction lower CT radiation dose: evaluation of 15,000 examinations.

    PubMed

    Noël, Peter B; Renger, Bernhard; Fiebich, Martin; Münzel, Daniela; Fingerle, Alexander A; Rummeny, Ernst J; Dobritz, Martin

    2013-01-01

    Evaluation of 15,000 computed tomography (CT) examinations to investigate whether iterative reconstruction (IR) sustainably reduces radiation exposure. Information from 15,000 CT examinations was collected, including all aspects of the exams such as scan parameters, patient information, and reconstruction instructions. The examinations were acquired between January 2010 and December 2012; after 15 months, a first-generation IR algorithm was installed. To collect the necessary information from PACS, RIS, MPPS and structured reports, a Dose Monitoring System was developed. To harvest all possible information, an optical character recognition system was integrated, for example to collect information from the screenshot of the CT dose report. The tool transfers all data to a database for further processing, such as the calculation of effective dose and organ doses. To evaluate whether IR provides a sustainable dose reduction, the effective dose values were statistically analyzed with respect to protocol type, diagnostic indication, and patient population. IR has the potential to reduce radiation dose significantly. Before the clinical introduction of IR, the average effective dose was 10.1 ± 7.8 mSv; with IR it was 8.9 ± 7.1 mSv (p* = 0.01). Especially in CTA, where kV-reduction protocols can be used, the dose reduction effect is significant (p* = 0.01), for example in aortic CTAs (before IR: average 14.2 ± 7.8 mSv, median 11.4 mSv; with IR: average 9.9 ± 7.4 mSv, median 7.4 mSv) and pulmonary CTAs (before IR: average 9.7 ± 6.2 mSv, median 7.7 mSv; with IR: average 6.4 ± 4.7 mSv, median 4.8 mSv). On the contrary, for unenhanced low-dose cranial scans (for example, of the sinuses) the reduction is not significant (before IR: average 6.6 ± 5.8 mSv, median 3.9 mSv; with IR: average 6.0 ± 3.1 mSv, median 3.2 mSv). The dose aspect remains a priority in CT research. Iterative reconstruction algorithms sustainably and significantly reduce radiation dose in clinical routine. Our results illustrate that not only in studies with a limited number of patients but also in clinical routine, IR provides long-term dose savings.

  6. A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.

    NASA Technical Reports Server (NTRS)

    Harris, J. D.

    1971-01-01

    The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
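
    A sketch of the underlying Bairstow-type iteration in ordinary floating point, using a finite-difference Jacobian for simplicity (which loosely echoes the theorem's tolerance of inexact Jacobians); the thesis's interval-arithmetic, contracting-interval version is not reproduced:

      import numpy as np

      def quad_remainder(a, p, q):
          """Synthetic division of a polynomial (descending coefficients a,
          degree >= 2) by x^2 + p*x + q; returns the remainder coefficients
          (r1, r0) of r1*x + r0 and the quotient coefficients."""
          b = np.zeros(len(a))
          for i, ai in enumerate(a):
              b[i] = ai - p * (b[i-1] if i >= 1 else 0.0) - q * (b[i-2] if i >= 2 else 0.0)
          return b[-2], b[-1] + p * b[-2], b[:-2]

      def bairstow(a, p=0.0, q=0.0, tol=1e-12, max_iter=100, h=1e-7):
          """Newton iteration on (p, q) driving the quadratic-division remainder
          to zero, yielding one quadratic factor and its pair of roots."""
          for _ in range(max_iter):
              r1, r0, _ = quad_remainder(a, p, q)
              if abs(r1) + abs(r0) < tol:
                  break
              r = np.array([r1, r0])
              J = np.empty((2, 2))
              J[:, 0] = (np.array(quad_remainder(a, p + h, q)[:2]) - r) / h
              J[:, 1] = (np.array(quad_remainder(a, p, q + h)[:2]) - r) / h
              dp, dq = np.linalg.solve(J, -r)
              p, q = p + dp, q + dq
          _, _, quotient = quad_remainder(a, p, q)
          return np.roots([1.0, p, q]), quotient   # factor roots, deflated polynomial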

  7. Disruption mitigation and avoidance at ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Maraschek, M.; Pautasso, G.; Esposito, B.; Granucci, G.; Stober, J.; Treutterer, W.

    2009-11-01

    Disruptions are a major concern for tokamaks and in particular for ITER. They cause high heat loads during the thermal quench and high mechanical forces during the subsequent current quench. The generation and loss of runaway electrons (highly accelerated electrons carrying large fractions of the plasma current) can damage the vessel structures. Therefore, schemes are implemented in present tokamaks to mitigate or even avoid disruptions. Mitigation through the injection of noble gases has been proven effective, reducing the thermal heat load by radiation and reducing the mechanical forces. In addition, 25% of the density required for the collisional suppression of runaways in ITER has been reached. To trigger the noble gas injection, a locked-mode detector is routinely used at ASDEX Upgrade; an extension to more complex precursors is planned. A different approach has been used for disruption avoidance: injecting ECRH, triggered by the loop-voltage increase before the disruption. Avoidance of an ongoing density-limit disruption has been achieved when the ECRH is deposited at resonant surfaces where MHD modes, such as the m=2/n=1, occur. Present schemes for the mitigation and eventual avoidance of disruptions will be discussed.

  8. Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.

    PubMed

    Cho, Soobum; Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence, and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system.

  9. Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink

    PubMed Central

    Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence, and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system. PMID:24883376
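
    For reference, the quantity being reduced can be computed as follows: the PAPR of one oversampled OFDM symbol in dB. The dummy-sequence optimization itself is not shown, and all parameters are illustrative:

      import numpy as np

      def papr_db(freq_symbols, oversample=4):
          """PAPR of an OFDM symbol: peak over mean instantaneous power of the
          oversampled time-domain signal (zero padding in the middle of the
          spectrum implements the oversampling)."""
          n = len(freq_symbols)
          padded = np.zeros(n * oversample, dtype=complex)
          padded[:n // 2] = freq_symbols[:n // 2]
          padded[-n // 2:] = freq_symbols[n // 2:]
          power = np.abs(np.fft.ifft(padded)) ** 2
          return 10 * np.log10(power.max() / power.mean())

      # Example: random QPSK on 64 subcarriers.
      rng = np.random.default_rng(0)
      sym = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
      print(f"PAPR = {papr_db(sym):.2f} dB")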

  10. Critical behavior and dimension crossover of pion superfluidity

    NASA Astrophysics Data System (ADS)

    Wang, Ziyue; Zhuang, Pengfei

    2016-09-01

    We investigate the critical behavior of pion superfluidity in the framework of the functional renormalization group (FRG). By solving the flow equations in the SU(2) linear sigma model at finite temperature and isospin density, and making comparison with the fixed point analysis of a general O(N) system with continuous dimension, we find that the pion superfluidity is a second order phase transition subject to an O(2) universality class with a dimension crossover from d_c = 4 to d_c = 3. This phenomenon provides a concrete example of dimension reduction in thermal field theory. The large-N expansion gives a temperature-independent critical exponent β and agrees with the FRG result only at zero temperature.

  11. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting, and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania’s South Esk Hydrology model developed by CSIRO. Root-mean-squared-error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
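
    A minimal IDW sketch, the spatial baseline against which the proposed DDW method is compared (the power parameter and example data are illustrative; under the reduction approach, time would be interpolated separately):

      import numpy as np

      def idw(points, values, query, power=2.0, eps=1e-12):
          """Inverse distance weighting: each query point gets a weighted
          average of the known samples with weights 1/d^power."""
          d = np.linalg.norm(points[None, :, :] - query[:, None, :], axis=-1)
          w = 1.0 / np.maximum(d, eps) ** power
          return (w * values[None, :]).sum(1) / w.sum(1)

      # Example: three stations, two query locations.
      pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      vals = np.array([10.0, 20.0, 30.0])
      print(idw(pts, vals, np.array([[0.5, 0.5], [0.1, 0.1]])))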

  12. Dimension Reduction of Hyperspectral Data on Beowulf Clusters

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek

    2000-01-01

    Traditional remote sensing instruments are multispectral, with observations collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have come into operation. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges, including the need for faster processing of such increased data volumes and for methods of data reduction. A spectral transformation widely used in remote sensing for dimension reduction is the Principal Components Analysis (PCA). In light of the growing number of spectral channels of modern instruments, the paper reports on the development of a parallel PCA and its implementation on two Beowulf cluster configurations, one with a fast Ethernet switch and the other with a Myrinet interconnect.
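
    The serial core of the computation can be sketched as follows; the parallel version distributes the covariance accumulation across cluster nodes (a rows x cols x bands array layout is assumed):

      import numpy as np

      def pca_bands(cube, n_keep):
          """PCA dimension reduction of a hyperspectral cube: eigendecompose
          the band-covariance matrix and project onto the top components."""
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          X -= X.mean(axis=0)
          cov = X.T @ X / (X.shape[0] - 1)          # bands-by-bands covariance
          eigvals, eigvecs = np.linalg.eigh(cov)
          top = eigvecs[:, np.argsort(eigvals)[::-1][:n_keep]]
          return (X @ top).reshape(rows, cols, n_keep)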

  13. Truncated Painlevé expansion: Tanh-traveling wave solutions and reduction of sine-Poisson equation to a quadrature for stationary and nonstationary three-dimensional collisionless cold plasma

    NASA Astrophysics Data System (ADS)

    Ibrahim, R. S.; El-Kalaawy, O. H.

    2006-10-01

    The relativistic nonlinear self-consistent equations for a collisionless cold plasma with stationary ions [R. S. Ibrahim, IMA J. Appl. Math. 68, 523 (2003)] are extended to 3 and 3+1 dimensions. The resulting system of equations is reduced to the sine-Poisson equation. The truncated Painlevé expansion and reduction of the partial differential equation to a quadrature problem (RQ method) are described and applied to obtain the traveling wave solutions of the sine-Poisson equation for stationary and nonstationary equations in 3 and 3+1 dimensions describing the charge-density equilibrium configuration model.
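
    A hedged sketch of the traveling-wave reduction to a quadrature, assuming a nonstationary sine-Poisson form (the paper's exact conventions may differ): with

      \nabla^2\phi - \phi_{tt} = \sin\phi , \qquad \phi = \phi(\xi) , \quad \xi = \mathbf{k}\cdot\mathbf{x} - \omega t ,

    the PDE collapses to an ODE, and one integration gives the quadrature

      (|\mathbf{k}|^2 - \omega^2)\,\phi'' = \sin\phi \quad\Longrightarrow\quad \tfrac{1}{2}\,(|\mathbf{k}|^2 - \omega^2)\,(\phi')^2 = C - \cos\phi ,

    so ξ(φ) follows by a single integral; tanh-type traveling waves correspond to the separatrix level of the constant C.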

  14. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
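
    Of the three inverse approaches listed above, the simplest to sketch is a Tikhonov-damped inverse filter for the fieldmap-to-susceptibility step (a blend of the first two approaches); the split Bregman TV solver favoured in the article is considerably more involved. The function name, regularization weight, and the assumption of B0 along z below are illustrative, not taken from the article.

      import numpy as np

      def tikhonov_qsm(fieldmap, reg_lambda=1e-2):
          """Tikhonov-regularized inverse filtering for the fieldmap-to-
          susceptibility step: divide by the unit dipole kernel in k-space,
          damping frequencies where the kernel vanishes (the magic-angle cone)."""
          nx, ny, nz = fieldmap.shape
          kx, ky, kz = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny),
                                   np.fft.fftfreq(nz), indexing="ij")
          k2 = kx**2 + ky**2 + kz**2
          with np.errstate(divide="ignore", invalid="ignore"):
              D = 1.0 / 3.0 - kz**2 / k2      # unit dipole kernel (B0 along z)
          D[0, 0, 0] = 0.0                    # undefined DC term
          F = np.fft.fftn(fieldmap)
          chi_k = F * D / (D**2 + reg_lambda) # Tikhonov-regularized inverse
          return np.real(np.fft.ifftn(chi_k))

      print(tikhonov_qsm(np.random.rand(32, 32, 32)).shape)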

  15. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  16. A dimension reduction strategy for improving the efficiency of computer-aided detection for CT colonography

    NASA Astrophysics Data System (ADS)

    Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong

    2013-02-01

    Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more information of the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we propose a new dimension reduction method which combines hierarchical clustering and principal component analysis (PCA) for the false positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group we achieved an area under the receiver operating characteristic curve of 0.905, which is as high as that of the original feature set. Meanwhile, the computation time is reduced by 70% and the feature set size is reduced by 77%. It can be concluded that the proposed method captures the most important information of the feature set and the classification accuracy is not affected after the dimension reduction. The result is promising and further investigation, such as automatic threshold setting, is worthwhile and in progress.
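
    A rough sketch of this clustering-then-PCA reduction, assuming SciPy and scikit-learn are available; the correlation-based feature distance, group count and function name are illustrative choices, not taken from the paper.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA

      def grouped_pca_reduce(X, n_groups=5, pcs_per_group=3):
          """Cluster correlated features hierarchically, then keep a few
          principal components from each cluster (the paper keeps 3)."""
          # Distance between features: 1 - |correlation|, so highly related
          # features land in the same cluster.
          corr = np.corrcoef(X, rowvar=False)
          dist = 1.0 - np.abs(corr)
          Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
          labels = fcluster(Z, t=n_groups, criterion="maxclust")
          parts = []
          for g in np.unique(labels):
              Xg = X[:, labels == g]
              k = min(pcs_per_group, Xg.shape[1])
              parts.append(PCA(n_components=k).fit_transform(Xg))
          return np.hstack(parts)

      X = np.random.rand(200, 60)       # e.g. 200 polyp candidates, 60 features
      print(grouped_pca_reduce(X).shape)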

  17. Reduced Radiation Dose with Model-based Iterative Reconstruction versus Standard Dose with Adaptive Statistical Iterative Reconstruction in Abdominal CT for Diagnosis of Acute Renal Colic.

    PubMed

    Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis

    2015-07-01

    To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using new generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or an increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.

  18. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  19. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full testing in SULTAN: 1. The mechanical role of copper strands in a CICC

    DOE PAGES

    Sanabria, Carlos; Lee, Peter J.; Starch, William; ...

    2015-06-22

    Cables made with Nb3Sn-based superconductor strands will provide the 13 T maximum peak magnetic field of the ITER Central Solenoid (CS) coils and they must survive up to 60,000 electromagnetic cycles. Accordingly, prototype designs of CS cable-in-conduit conductors (CICC) were electromagnetically tested over multiple magnetic field cycles and warm-up-cool-down scenarios in the SULTAN facility at CRPP. We report here a post mortem metallographic analysis of two CS CICC prototypes which exhibited some rate of irreversible performance degradation during cycling. The standard ITER CS CICC cable design uses a combination of superconducting and Cu strands, and because the Lorentz force on the strand is proportional to the transport current in the strand, removing the copper strands (while increasing the Cu:SC ratio of the superconducting strands) was proposed as one way of reducing the strand load. In this study we compare the two alternative CICCs, with and without Cu strands, keeping in mind that the degradation after the SULTAN test was lower for the CICC without Cu strands. The post mortem metallographic evaluation revealed that the overall strand transverse movement was 20% lower in the CICC without Cu strands and that fewer tensile filament fractures were found, both indications of an overall reduction in high tensile strain regions. Furthermore, it was interesting to see that the Cu strands in the mixed cable design (with higher degradation) helped reduce the contact stresses on the high pressure side of the CICC, but in either case, the strain reduction mechanisms were not enough to suppress cyclic degradation. Advantages and disadvantages of each conductor design are discussed here with the aim of understanding the sources of the degradation.

  20. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

    The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing the image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). The image noise was measured in the first study using a body phantom. The CNR was measured in the second study using a contrast phantom and the spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that the images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study showed that the images reconstructed using ASIR-V had significantly improved spatial resolution compared with those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to reduce the radiation dose further without compromising image quality.

  1. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage. Therefore, an optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continually during decomposition to avoid fitting noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. The termination settings, sub-feature dictionary selections and operational efficiency of CD-MaMP and CD-SaMP are compared on noisy simulated gear vibration signals. The simulation results show that the attenuation-coefficient termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has a clear advantage in sparsity and efficiency over CD-MaMP. Sensor-based vibration signals measured from a practical engineering gearbox have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870

  2. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage. Therefore, an optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continually during decomposition to avoid fitting noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. The termination settings, sub-feature dictionary selections and operational efficiency of CD-MaMP and CD-SaMP are compared on noisy simulated gear vibration signals. The simulation results show that the attenuation-coefficient termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has a clear advantage in sparsity and efficiency over CD-MaMP. Sensor-based vibration signals measured from a practical engineering gearbox have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.
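
    To make the single-atom pursuit concrete, a generic matching pursuit sketch with an energy-attenuation stopping rule is given below. It is a stand-in under stated assumptions (unit-norm dictionary columns, an illustrative tolerance), not the authors' CD-SaMP with its composite modulation dictionary.

      import numpy as np

      def matching_pursuit(signal, dictionary, max_iter=50, atten_tol=1e-3):
          """Greedy single-atom matching pursuit: pick the atom best
          correlated with the residual, subtract its contribution, and stop
          once the residual energy decays slowly (a stand-in for the
          paper's attenuation-coefficient criterion)."""
          residual = signal.astype(float).copy()
          coeffs = np.zeros(dictionary.shape[1])
          prev_energy = residual @ residual
          for _ in range(max_iter):
              corr = dictionary.T @ residual      # atoms are unit-norm columns
              k = np.argmax(np.abs(corr))
              coeffs[k] += corr[k]
              residual -= corr[k] * dictionary[:, k]
              energy = residual @ residual
              if prev_energy - energy < atten_tol * prev_energy:
                  break
              prev_energy = energy
          return coeffs, residual

      # Toy dictionary of unit-norm random atoms
      D = np.random.randn(128, 256)
      D /= np.linalg.norm(D, axis=0)
      sig = D[:, 7] * 2.0 + 0.05 * np.random.randn(128)
      c, r = matching_pursuit(sig, D)
      print(np.argmax(np.abs(c)))   # recovers atom 7 with high probability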

  3. Reduction theorems for optimal unambiguous state discrimination of density matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van

    2003-08-01

    We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2).

  4. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  5. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  6. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; Luke Moughon

    2005-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models have been developed and applied that illustrate the fundamental relationships between design parameters and friction losses. Low friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several additional power cylinder design areas to the overall system analysis. These include analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant and 5% for surface characteristics are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.

  7. CT dose reduction using Automatic Exposure Control and iterative reconstruction: A chest paediatric phantoms study.

    PubMed

    Greffier, Joël; Pereira, Fabricio; Macri, Francesco; Beregi, Jean-Paul; Larbi, Ahmed

    2016-04-01

    To evaluate the impact of Automatic Exposure Control (AEC) on radiation dose and image quality in paediatric chest scans (MDCT), with or without iterative reconstruction (IR). Three anthropomorphic phantoms representing children aged one, five and 10-year-old were explored using AEC system (CARE Dose 4D) with five modulation strength options. For each phantom, six acquisitions were carried out: one with fixed mAs (without AEC) and five each with different modulation strength. Raw data were reconstructed with Filtered Back Projection (FBP) and with two distinct levels of IR using soft and strong kernels. Dose reduction and image quality indices (Noise, SNR, CNR) were measured in lung and soft tissues. Noise Power Spectrum (NPS) was evaluated with a Catphan 600 phantom. The use of AEC produced a significant dose reduction (p<0.01) for all anthropomorphic sizes employed. According to the modulation strength applied, dose delivered was reduced from 43% to 91%. This pattern led to significantly increased noise (p<0.01) and reduced SNR and CNR (p<0.01). However, IR was able to improve these indices. The use of AEC/IR preserved image quality indices with a lower dose delivered. Doses were reduced from 39% to 58% for the one-year-old phantom, from 46% to 63% for the five-year-old phantom, and from 58% to 74% for the 10-year-old phantom. In addition, AEC/IR changed the patterns of NPS curves in amplitude and in spatial frequency. In chest paediatric MDCT, the use of AEC with IR allows one to obtain a significant dose reduction while maintaining constant image quality indices.

  8. Imaging of Arthroplasties: Improved Image Quality and Lesion Detection With Iterative Metal Artifact Reduction, a New CT Metal Artifact Reduction Technique.

    PubMed

    Subhas, Naveen; Polster, Joshua M; Obuchowski, Nancy A; Primak, Andrew N; Dong, Frank F; Herts, Brian R; Iannotti, Joseph P

    2016-08-01

    The purpose of this study was to compare iterative metal artifact reduction (iMAR), a new single-energy metal artifact reduction technique, with filtered back projection (FBP) in terms of attenuation values, qualitative image quality, and streak artifacts near shoulder and hip arthroplasties and observer ability with these techniques to detect pathologic lesions near an arthroplasty in a phantom model. Preoperative and postoperative CT scans of 40 shoulder and 21 hip arthroplasties were reviewed. All postoperative scans were obtained using the same technique (140 kVp, 300 quality reference mAs, 128 × 0.6 mm detector collimation) on one of three CT scanners and reconstructed with FBP and iMAR. The attenuation differences in bones and soft tissues between preoperative and postoperative scans at the same location were compared; image quality and streak artifact for both reconstructions were qualitatively graded by two blinded readers. Observer ability and confidence to detect lesions near an arthroplasty in a phantom model were graded. For both readers, iMAR had more accurate attenuation values (p < 0.001), qualitatively better image quality (p < 0.001), and less streak artifact (p < 0.001) in all locations near arthroplasties compared with FBP. Both readers detected more lesions (p ≤ 0.04) with higher confidence (p ≤ 0.01) with iMAR than with FBP in the phantom model. The iMAR technique provided more accurate attenuation values, better image quality, and less streak artifact near hip and shoulder arthroplasties than FBP; iMAR also increased observer ability and confidence to detect pathologic lesions near arthroplasties in a phantom model.

  9. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    PubMed

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%, P < 0.001), MBIR provided the largest noise reduction (-79% compared with FBP), outperforming ASiR (-59% compared with ASiR 100%; P < 0.001). Increased noise and lower SNR with ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects diagnostic accuracy of submillisievert CCTA for CAD detection.

  10. Adaptive statistical iterative reconstruction and bismuth shielding for evaluation of dose reduction to the eye and image quality during head CT

    NASA Astrophysics Data System (ADS)

    Kim, Myeong Seong; Choi, Jiwon; Kim, Sun Young; Kweon, Dae Cheol

    2014-03-01

    There is a concern regarding the adverse effects of increasing radiation doses due to repeated computed tomography (CT) scans, especially in radiosensitive organs and portions thereof, such as the lenses of the eyes. Bismuth shielding with an adaptive statistical iterative reconstruction (ASIR) algorithm was recently introduced in our clinic as a method to reduce the absorbed radiation dose. This technique was applied to the lens of the eye during CT scans. The purpose of this study was to evaluate the reduction in the absorbed radiation dose and to determine the noise level when using bismuth shielding and the ASIR algorithm with the GE DC 750 HD 64-channel CT scanner for CT of the head of a humanoid phantom. With the use of bismuth shielding, the noise level was higher in the beam-hardening artifact areas than in the revealed artifact areas. However, with the use of ASIR, the noise level was lower than that with the use of bismuth alone; it was also lower in the artifact areas. The reduction in the radiation dose with the use of bismuth was greatest at the surface of the phantom to a limited depth. In conclusion, it is possible to reduce the radiation level and slightly decrease the bismuth-induced noise level by using a combination of ASIR as an algorithm process and bismuth as an in-plane hardware-type shielding method.

  11. Prospective iterative trial of proteasome inhibitor-based desensitization.

    PubMed

    Woodle, E S; Shields, A R; Ejaz, N S; Sadaka, B; Girnita, A; Walsh, R C; Alloway, R R; Brailey, P; Cardi, M A; Abu Jawdeh, B G; Roy-Chaudhury, P; Govil, A; Mogilishetty, G

    2015-01-01

    A prospective iterative trial of proteasome inhibitor (PI)-based therapy for reducing HLA antibody (Ab) levels was conducted in five phases differing in bortezomib dosing density and plasmapheresis timing. Phases included 1 or 2 bortezomib cycles (1.3 mg/m² × 6-8 doses), one rituximab dose and plasmapheresis. HLA Abs were measured by solid phase and flow cytometry (FCM) assays. Immunodominant Ab (iAb) was defined as the highest HLA Ab level. Forty-four patients received 52 desensitization courses (7 patients enrolled in multiple phases): Phase 1 (n = 20), Phase 2 (n = 12), Phase 3 (n = 10), Phase 4 (n = 5), Phase 5 (n = 5). iAb reductions were observed in 38 of 44 (86%) patients and persisted up to 10 months. In Phase 1, a 51.5% iAb reduction was observed at 28 days with bortezomib alone. iAb reductions increased with higher bortezomib dosing densities and included class I, II, and public antigens (HLA DRβ3, HLA DRβ4 and HLA DRβ5). FCM median channel shifts decreased in 11/11 (100%) patients by a mean of 103 ± 54 mean channel shifts (log scale). Nineteen out of 44 patients (43.2%) were transplanted with low acute rejection rates (18.8%) and de novo DSA formation (12.5%). In conclusion, PI-based desensitization consistently and durably reduces HLA Ab levels providing an alternative to intravenous immune globulin-based desensitization.

  12. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was attempted using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region-of-interest setting was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. Ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
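
    The core maximum likelihood-expectation maximization (MLEM) update used here can be written compactly in matrix form. The dense-matrix toy below is a sketch only: a real CT implementation uses forward/back-projection operators rather than an explicit system matrix, and the names and sizes are illustrative.

      import numpy as np

      def mlem(A, y, n_iter=100):
          """MLEM for reconstruction with a known system matrix A:
          x <- x * A^T(y / Ax) / A^T 1. Ordered subsets (OS-EM) would apply
          the same update over row subsets of A to accelerate convergence."""
          x = np.ones(A.shape[1])
          norm = A.T @ np.ones(A.shape[0])     # sensitivity image
          for _ in range(n_iter):
              proj = A @ x
              proj[proj == 0] = 1e-12          # guard against division by zero
              x *= (A.T @ (y / proj)) / norm
          return x

      # Toy problem: random nonnegative system matrix and object
      A = np.abs(np.random.rand(300, 100))
      x_true = np.abs(np.random.rand(100))
      x_hat = mlem(A, A @ x_true)
      print(np.corrcoef(x_true, x_hat)[0, 1])  # correlation with true object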

  13. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
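
    For intuition, the linear special case mentioned above, where Rayleigh quotient optimization coincides with Fisher discrimination, reduces to a generalized eigenproblem. The sketch below assumes SciPy, adds an illustrative ridge term for stability, and does not implement QUADRO's quadratic, elliptical-model machinery.

      import numpy as np
      from scipy.linalg import eigh

      def rayleigh_direction(X0, X1):
          """Maximize the Rayleigh quotient (w' Sb w)/(w' Sw w) between two
          classes via a generalized eigenproblem."""
          m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
          d = (m0 - m1)[:, None]
          Sb = d @ d.T                                  # between-class scatter
          Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
          Sw += 1e-6 * np.eye(Sw.shape[0])              # ridge for stability
          vals, vecs = eigh(Sb, Sw)                     # ascending eigenvalues
          return vecs[:, -1]                            # top generalized eigvec

      X0 = np.random.randn(100, 10)
      X1 = np.random.randn(100, 10) + 0.5
      w = rayleigh_direction(X0, X1)
      print((X0 @ w).mean(), (X1 @ w).mean())           # separated projections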

  14. Partial Least Squares for Discrimination in fMRI Data

    PubMed Central

    Andersen, Anders H.; Rayens, William S.; Liu, Yushu; Smith, Charles D.

    2011-01-01

    Multivariate methods for discrimination were used in the comparison of brain activation patterns between groups of cognitively normal women who are at either high or low Alzheimer's disease risk based on family history and apolipoprotein-E4 status. Linear discriminant analysis (LDA) was preceded by dimension reduction using either principal component analysis (PCA), partial least squares (PLS), or a new oriented partial least squares (OrPLS) method. The aim was to identify a spatial pattern of functionally connected brain regions that was differentially expressed by the risk groups and yielded optimal classification accuracy. Multivariate dimension reduction is required prior to LDA when the data contain more feature variables than there are observations on individual subjects. Whereas PCA has been commonly used to identify covariance patterns in neuroimaging data, this approach only identifies gross variability and is not capable of distinguishing among-group from within-group variability. PLS and OrPLS provide a more focused dimension reduction by incorporating information on class structure and therefore lead to more parsimonious models for discrimination. Performance was evaluated in terms of the cross-validated misclassification rates. The results support the potential of using fMRI as an imaging biomarker or diagnostic tool to discriminate individuals with disease or high risk. PMID:22227352
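
    A minimal sketch of the PLS-then-LDA pipeline described here, using scikit-learn. The component count, synthetic data and training-set accuracy are illustrative only (the paper evaluates cross-validated misclassification rates, and its OrPLS variant is not available off the shelf).

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Dimension reduction by PLS scores, then LDA on the low-dimensional
      # scores: feasible even when features (voxels) far outnumber subjects.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((40, 5000))          # 40 subjects, 5000 voxels
      y = np.repeat([0, 1], 20)                    # two risk groups
      X[y == 1, :50] += 0.4                        # weak group difference

      pls = PLSRegression(n_components=5).fit(X, y)
      scores = pls.transform(X)                    # 40 x 5 latent scores
      lda = LinearDiscriminantAnalysis().fit(scores, y)
      print(lda.score(scores, y))                  # training accuracy only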

  15. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  16. Nuclear patterns of human breast cancer cells during apoptosis: characterisation by fractal dimension and co-occurrence matrix statistics.

    PubMed

    Losa, Gabriele A; Castelli, Christian

    2005-11-01

    An analytical strategy combining fractal geometry and grey-level co-occurrence matrix (GLCM) statistics was devised to investigate ultrastructural changes in oestrogen-insensitive SK-BR3 human breast cancer cells undergoing apoptosis in vitro. Apoptosis was induced by 1 microM calcimycin (A23187 Ca(2+) ionophore) and assessed by measuring conventional cellular parameters during the culture period. SK-BR3 cells entered the early stage of apoptosis within 24 h of treatment with calcimycin, which induced detectable changes in nuclear components, as documented by increased values of most GLCM parameters and by the general reduction of the fractal dimensions. In these affected cells, morphonuclear traits were accompanied by the reduction of distinct gangliosides and loss of unidentifiable glycolipid molecules at the cell surface. All these changes were shown to be involved in apoptosis before the detection of conventional markers, which were only measurable during the active phases of apoptotic cell death. In overtly apoptotic cells treated with 1 microM calcimycin for 72 h, most nuclear components underwent dramatic ultrastructural changes, including marginalisation and condensation of chromatin, as reflected in a significant reduction of their fractal dimensions. Hence, both fractal and GLCM analyses confirm that the morphological reorganisation of nuclei, attributable to a loss of structural complexity, occurs early in apoptosis.
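
    Fractal dimension estimates of the kind used in this study are commonly obtained by box counting. The sketch below, for a binary 2-D mask, is a generic illustration with assumed names and toy data, not the authors' exact estimator.

      import numpy as np

      def box_counting_dimension(mask):
          """Estimate the fractal dimension of a binary 2-D mask by counting
          occupied boxes at dyadic scales and fitting log N(s) vs log(1/s)."""
          n = 2 ** int(np.floor(np.log2(min(mask.shape))))
          mask = mask[:n, :n]
          sizes, counts = [], []
          s = n
          while s >= 2:
              view = mask.reshape(n // s, s, n // s, s)
              occupied = view.any(axis=(1, 3)).sum()  # boxes containing structure
              sizes.append(s)
              counts.append(max(occupied, 1))
              s //= 2
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                                np.log(counts), 1)
          return slope

      mask = np.random.rand(256, 256) > 0.7         # toy binary "chromatin" mask
      print(box_counting_dimension(mask))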

  17. Synthetic dimensions for cold atoms from shaking a harmonic trap

    NASA Astrophysics Data System (ADS)

    Price, Hannah M.; Ozawa, Tomoki; Goldman, Nathan

    2017-02-01

    We introduce a simple scheme to implement synthetic dimensions in ultracold atomic gases, which only requires two basic and ubiquitous ingredients: the harmonic trap, which confines the atoms, combined with a periodic shaking. In our approach, standard harmonic oscillator eigenstates are reinterpreted as lattice sites along a synthetic dimension, while the coupling between these lattice sites is controlled by the applied time modulation. The phase of this modulation enters as a complex hopping phase, leading straightforwardly to an artificial magnetic field upon adding a second dimension. We show that this artificial gauge field has important consequences, such as the counterintuitive reduction of average energy under resonant driving, or the realization of quantum Hall physics. Our approach offers significant advantages over previous implementations of synthetic dimensions, providing an intriguing route towards higher-dimensional topological physics and strongly-correlated states.

  18. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  19. Integrating Dimension Reduction and Out-of-Sample Extension in Automated Classification of Ex Vivo Human Patellar Cartilage on Phase Contrast X-Ray Computed Tomography

    PubMed Central

    Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel

    2015-01-01

    Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875

  20. The Challenge of Universal Primary Education: Strategies for Achieving the International Development Targets.

    ERIC Educational Resources Information Center

    Department for International Development, London (England).

    The Department for International Development (DFID) is the British government department responsible for promoting development and the reduction of poverty in sites in developing and transition countries around the world. This paper focuses on the education dimension of poverty reduction, and specifically the attainment of the International…

  1. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network parameters, i.e., the number of hidden layers, learning rate, momentum, and RMS target. This study tested the capability of an artificial neural network to predict erosion risk, running multiple simulations over these input parameters to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a more significant effect on accuracy than the other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, in the ANN 14 simulation with a parameter combination of 1 hidden layer (HL), learning rate (LR) 0.01, momentum (M) 0.5, RMS 0.0001, and 15,000 iterations. The ANN training accuracy was not influenced by the number of input channels (the erosion factors) or by the data dimensions; rather, it was determined by the network parameters.
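
    The parameter roles discussed above can be explored with any off-the-shelf network. A hedged scikit-learn sketch follows, in which the dataset is synthetic and the mapping of the abstract's "RMS" onto the tol stopping tolerance is an assumption on my part.

      from sklearn.neural_network import MLPClassifier
      from sklearn.datasets import make_classification

      # Toy stand-in for the erosion-factor dataset: the knobs the abstract
      # discusses map onto hidden_layer_sizes, learning_rate_init, momentum,
      # tol (an RMS-like stopping tolerance, assumed) and max_iter.
      X, y = make_classification(n_samples=500, n_features=8, n_classes=3,
                                 n_informative=5, random_state=1)
      for iters in (500, 5000, 15000):
          net = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd",
                              learning_rate_init=0.01, momentum=0.5,
                              tol=1e-4, max_iter=iters, random_state=1)
          net.fit(X, y)
          print(iters, round(net.score(X, y), 3))   # training accuracy per budget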

  2. From virtual clustering analysis to self-consistent clustering analysis: a mathematical study

    NASA Astrophysics Data System (ADS)

    Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam

    2018-03-01

    In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), as well as provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on a key postulation of "once response similarly, always response similarly", clustering is performed in an offline stage by machine learning techniques (k-means and SOM), and facilitates substantial reduction of computational complexity in an online predictive stage. The clear mathematical setup allows for the first time a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously, and found to be of second order from numerical investigations. Furthermore, we propose to suitably enlarge the domain in VCA, such that the boundary terms may be neglected in the Lippmann-Schwinger equation, by virtue of Saint-Venant's principle. These terms were not obtained in the original SCA paper, and we find they may well be responsible for the numerical dependence on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error and reduces the numerical cost by avoiding the outer-loop iteration that SCA requires for material-property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.
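
    The offline clustering stage can be illustrated schematically as below; the response matrix is random stand-in data (in SCA it would come from pre-computed elastic strain-concentration responses of the material points), and the cluster count and names are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      # Offline stage sketch: group material points whose (pre-computed)
      # responses are similar, so the online stage solves the reduced
      # Lippmann-Schwinger system over a handful of clusters instead of
      # every point.
      n_points, n_load_cases = 10000, 6
      responses = np.random.rand(n_points, n_load_cases)  # stand-in DNS data
      labels = KMeans(n_clusters=16, n_init=10,
                      random_state=0).fit_predict(responses)
      volume_fractions = np.bincount(labels) / n_points   # per-cluster weights
      print(volume_fractions.round(3))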

  3. Analysis-Driven Design Optimization of a SMA-Based Slat-Cove Filler for Aeroacoustic Noise Reduction

    NASA Technical Reports Server (NTRS)

    Scholten, William; Hartl, Darren; Turner, Travis

    2013-01-01

    Airframe noise is a significant component of environmental noise in the vicinity of airports. The noise associated with the leading-edge slat of typical transport aircraft is a prominent source of airframe noise. Previous work suggests that a slat-cove filler (SCF) may be an effective noise treatment. Hence, development and optimization of a practical slat-cove-filler structure is a priority. The objectives of this work are to optimize the design of a functioning SCF which incorporates superelastic shape memory alloy (SMA) materials as flexures that permit the deformations involved in the configuration change. The goal of the optimization is to minimize the actuation force needed to retract the slat-SCF assembly while satisfying constraints on the maximum SMA stress and on the SCF deflection under static aerodynamic pressure loads, and while also satisfying the condition that the SCF self-deploy during slat extension. A finite element analysis model based on a physical bench-top model is created in Abaqus so that automated iterative analysis of the design can be performed. In order to achieve an optimized design, several design variables associated with the current SCF configuration are considered, such as the thicknesses of the SMA flexures and the dimensions of various components, SMA and conventional. Design-of-experiments (DOE) studies are performed to investigate the structural response to an aerodynamic pressure load and to slat retraction and deployment. DOE results are then used to inform the optimization process, which determines a design minimizing actuator forces while satisfying the required constraints.

  4. Radiation dose reduction with chest computed tomography using adaptive statistical iterative reconstruction technique: initial experience.

    PubMed

    Prakash, Priyanka; Kalra, Mannudeep K; Digumarthy, Subba R; Hsieh, Jiang; Pien, Homer; Singh, Sarabjeet; Gilman, Matthew D; Shepard, Jo-Anne O

    2010-01-01

    To assess radiation dose reduction and image quality for weight-based chest computed tomographic (CT) examinations reconstructed using the adaptive statistical iterative reconstruction (ASIR) technique. With local ethical committee approval, weight-adjusted chest CT examinations were performed using ASIR in 98 patients and filtered backprojection (FBP) in 54 weight-matched patients on a 64-slice multidetector CT. Patients were categorized into 3 groups: 60 kg or less (n = 32), 61 to 90 kg (n = 77), and 91 kg or more (n = 43) for weight-based adjustment of noise indices for automatic exposure control (Auto mA; GE Healthcare, Waukesha, Wis). Remaining scan parameters were held constant at 0.984:1 pitch, 120 kilovolts (peak), 40-mm table feed per rotation, and 2.5-mm section thickness. Patients' weight, scanning parameters, and CT dose index volume were recorded. Effective doses (EDs) were estimated. Image noise was measured in the descending thoracic aorta at the level of the carina. Data were analyzed using analysis of variance. Compared with FBP, ASIR was associated with an overall mean (SD) decrease of 27.6% in ED (ASIR, 8.8 [2.3] mSv; FBP, 12.2 [2.1] mSv; P < 0.0001). With the use of ASIR, the ED values were 6.5 (1.8) mSv (28.8% decrease), 7.3 (1.6) mSv (27.3% decrease), and 12.8 (2.3) mSv (26.8% decrease) for the weight groups of 60 kg or less, 61 to 90 kg, and 91 kg or more, respectively, compared with 9.2 (2.3) mSv, 10.0 (2.0) mSv, and 17.4 (2.1) mSv with FBP (P < 0.0001). Despite the dose reduction, there was less noise with ASIR (12.6 [2.9] HU) than with FBP (16.6 [6.2] HU; P < 0.0001). Adaptive statistical iterative reconstruction helps reduce chest CT radiation dose and improve image quality compared with the conventionally used FBP image reconstruction.

  5. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include prediction of roller-edge stresses and of the influence of shaft and housing distortion on bearing performance.

  6. An efficient flexible-order model for 3D nonlinear water waves

    NASA Astrophysics Data System (ADS)

    Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.

    2009-04-01

    The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration counts and optimal scaling of the solution effort. Calculations for steep 3D nonlinear waves and for a shoaling problem show good agreement with experimental measurements and other calculations from the literature.
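
    The solver structure described here, a Krylov iteration wrapped around a preconditioner, can be sketched in a few lines. This is not the paper's code: scipy's GMRES is applied to a 2D five-point Laplacian, and an incomplete-LU factorization stands in for the paper's multigrid preconditioner with Gauss-Seidel smoothing.

```python
# Sketch: preconditioned GMRES on a discretized 2D Laplace problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                        # interior points per dimension
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 2D five-point Laplacian
b = np.ones(n * n)

ilu = spla.spilu(A, drop_tol=1e-4)            # ILU stand-in for multigrid
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner as an operator

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(b - A @ x))        # info == 0 means converged
```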

  7. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    PubMed

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.

  8. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-11-26

    TOPLHA (Torino Polytechnic Lower Hybrid Antenna) is an innovative code for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. The tool provides a detailed performance prediction for any LH launcher by computing the antenna scattering parameters, the current distribution, electric field maps and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized, multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. The electric current distribution on conductors, the electric field distribution at the interface with the plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  9. Thermodynamic properties of triangle-well fluids in two dimensions: MC and MD simulations.

    PubMed

    Reyes, Yuri; Bárcenas, Mariana; Odriozola, Gerardo; Orea, Pedro

    2016-11-07

    With the aim of providing complementary data on the thermodynamic properties of the triangular-well potential, the vapor/liquid phase diagrams for this potential with different interaction ranges were calculated in two dimensions by Monte Carlo and molecular dynamics simulations; the vapor/liquid interfacial tension was also calculated. As reported for other interaction potentials, it was observed that the reduction of dimensionality causes the phase diagram to shrink. Finally, with the aid of reported data for the same potential in three dimensions, it was observed that this potential does not follow the principle of corresponding states.

  10. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research

    PubMed Central

    Weng, Chunhua

    2013-01-01

    Objective To review the methods and dimensions of data quality assessment in the context of electronic health record (EHR) data reuse for research. Materials and methods A review of the clinical research literature discussing data quality assessment methodology for EHR data was performed. Using an iterative process, the aspects of data quality being measured were abstracted and categorized, as well as the methods of assessment used. Results Five dimensions of data quality were identified (completeness, correctness, concordance, plausibility, and currency), along with seven broad categories of data quality assessment methods: comparison with gold standards, data element agreement, data source agreement, distribution comparison, validity checks, log review, and element presence. Discussion Examination of the methods by which clinical researchers have investigated the quality and suitability of EHR data for research shows that there are fundamental features of data quality, which may be difficult to measure, as well as proxy dimensions. Researchers interested in the reuse of EHR data for clinical research are recommended to consider the adoption of a consistent taxonomy of EHR data quality, to remain aware of the task-dependence of data quality, to integrate work on data quality assessment from other fields, and to adopt systematic, empirically driven, statistically based methods of data quality assessment. Conclusion There is currently little consistency or potential generalizability in the methods used to assess EHR data quality. If the reuse of EHR data for clinical research is to become accepted, researchers should adopt validated, systematic methods of EHR data quality assessment. PMID:22733976

  11. A Third-Generation Adaptive Statistical Iterative Reconstruction Technique: Phantom Study of Image Noise, Spatial Resolution, Lesion Detectability, and Dose Reduction Potential.

    PubMed

    Euler, André; Solomon, Justin; Marin, Daniele; Nelson, Rendon C; Samei, Ehsan

    2018-06-01

    The purpose of this study was to assess image noise, spatial resolution, lesion detectability, and the dose reduction potential of a proprietary third-generation adaptive statistical iterative reconstruction (ASIR-V) technique. A phantom representing five different body sizes (12-37 cm) and a contrast-detail phantom containing lesions of five low-contrast levels (5-20 HU) and three sizes (2-6 mm) were deployed. Both phantoms were scanned on a 256-MDCT scanner at six different radiation doses (1.25-10 mGy). Images were reconstructed with filtered back projection (FBP), ASIR-V with 50% blending with FBP (ASIR-V 50%), and ASIR-V without blending (ASIR-V 100%). In the first phantom, noise properties were assessed by noise power spectrum analysis. Spatial resolution properties were measured by use of task transfer functions for objects of different contrasts. Noise magnitude, noise texture, and resolution were compared between the three groups. In the second phantom, low-contrast detectability was assessed by nine human readers independently for each condition. The dose reduction potential of ASIR-V was estimated on the basis of a generalized linear statistical regression model. On average, image noise was reduced by 37.3% with ASIR-V 50% and by 71.5% with ASIR-V 100% compared with FBP. ASIR-V shifted the noise power spectrum toward lower frequencies compared with FBP. The spatial resolution of ASIR-V was equivalent or slightly superior to that of FBP, except for the low-contrast object, which had lower resolution. Lesion detection significantly increased with both ASIR-V levels (p = 0.001), with an estimated radiation dose reduction potential of 15% ± 5% (SD) for ASIR-V 50% and 31% ± 9% for ASIR-V 100%. ASIR-V reduced image noise and improved lesion detection compared with FBP and has potential for radiation dose reduction while preserving low-contrast detectability.

  12. Deuterium results at the negative ion source test facility ELISE

    NASA Astrophysics Data System (ADS)

    Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.

    2018-05-01

    The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the Extraction from a Large Ion Source Experiment (ELISE) test facility, a source of half this size has been operational since 2013. The goal of this experiment is to demonstrate high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable operation in steady state with an RF power of up to 300 kW. While the required D- current density has almost been reached in short pulses, the performance in long pulses is limited, in particular in deuterium, by inhomogeneous and unstable currents of co-extracted electrons. By applying refined caesium evaporation and distribution procedures, and by reducing and symmetrizing the electron currents, considerable progress has been made: up to 190 A/m2 of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.

  13. Scoping studies of shielding to reduce the shutdown dose rates in the ITER ports

    NASA Astrophysics Data System (ADS)

    Juárez, R.; Guirao, J.; Pampin, R.; Loughlin, M.; Polunovskiy, E.; Le Tonqueze, Y.; Bertalot, L.; Kolsek, A.; Ogando, F.; Udintsev, V.; Walsh, M.

    2018-07-01

    The planned in situ maintenance tasks in the ITER port interspace are fundamental to ensure the operation of equipment to control, evaluate and optimize the plasma performance during the entire facility lifetime. They are subject to a shutdown dose rate (SDDR) limit of 100 µSv h⁻¹ after 10⁶ s of cooling time, which is nowadays a design driver for the port plugs as well as for the application of ALARA. Three conceptual shielding proposals outside the ITER ports are studied in this work to support the achievement of this objective. Considered one by one, they offer reductions ranging from 25% to 50%, which are rather significant. This paper shows that, by combining these shields, an SDDR as low as 57 µSv h⁻¹ can be achieved with a local approach considering only radiation from one port (no cross-talk from neighboring ports). The locally evaluated SDDR is well below the limit, which is an essential prerequisite for achieving 100 µSv h⁻¹ in a global analysis including all contributions. Further studies will have to deal with a realistic port plug design and the cross-talk from neighbouring ports.

  14. Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodak, A.; Zhai, Y.; Wang, W.

    As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts while fluid dynamics analysis was performed only in the liquid part. The ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between the DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of the DSM and DFW, to ensure that the DSM design meets the DFW cooling requirements. The design of the DSM includes voids filled with boron carbide pellets, allowing weight reduction while preserving the shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties, using an analytical relation for the thermal conductivity. Results of the analysis led to design modifications improving the heat transfer efficiency of the DSM. Furthermore, the effect of the design modifications on thermal performance, as well as the effect of the boron carbide, will be presented.

  15. A new method for computation of eigenvector derivatives with distinct and repeated eigenvalues in structural dynamic analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhengguang; Lai, Siu-Kai; Wu, Baisheng

    2018-07-01

    Determining eigenvector derivatives is a challenging task due to the singularity of the coefficient matrices of the governing equations, especially for structural dynamic systems with repeated eigenvalues. An effective strategy is proposed to construct a non-singular coefficient matrix, which can be used directly to obtain the eigenvector derivatives with distinct and repeated eigenvalues. The approach has the further advantage of requiring only the eigenvalues and eigenvectors of interest, without solving for the particular solutions of the eigenvector derivatives. The Symmetric Quasi-Minimal Residual (SQMR) method is then adopted to solve the governing equations, utilizing only the existing factored (shifted) stiffness matrix from an iterative eigensolution such as the subspace iteration method or the Lanczos algorithm. The present method deals with both simple and repeated eigenvalues in a unified manner. Three numerical examples are given to illustrate the accuracy and validity of the proposed algorithm. Highly accurate approximations to the eigenvector derivatives are obtained within a few iteration steps, yielding a significant reduction in computational effort. The method can be incorporated into a coupled eigensolver/derivative software module and is, in particular, applicable to finite element models with large sparse matrices.

  16. Overview of Recent DIII-D Experimental Results

    NASA Astrophysics Data System (ADS)

    Fenstermacher, Max

    2015-11-01

    Recent DIII-D experiments have added to the ITER physics basis and to the physics understanding needed for extrapolation to future devices. ELMs were suppressed by RMPs in He plasmas consistent with ITER non-nuclear phase conditions, and in steady-state hybrid plasmas. Characteristics of the EHO were measured during both standard high-torque and low-torque enhanced-pedestal QH-mode with edge broadband fluctuations, including edge-localized density fluctuations measured with a microwave imaging reflectometer. The path to Super H-mode was verified at high beta with a QH-mode edge, and in plasmas with ELMs triggered by Li granules. Thermal-quench mitigation acceptable for ITER was obtained with low-Ne-fraction shattered pellet injection. Divertor ne and Te data from Thomson scattering confirm predicted drift-driven asymmetries in electron pressure, and X-divertor heat flux reduction and detachment were characterized. The crucial mechanisms for E×B shear control of turbulence were clarified. In collaboration with EAST, high beta-p scenarios were obtained with 80% bootstrap fraction, high H-factor and stability limits, and large-radius ITBs leading to low AE activity. Work supported by the US Department of Energy under DE-FC02-04ER54698 and DE-AC52-07NA27344.

  17. Dynamic re-weighted total variation technique and statistical iterative reconstruction method for x-ray CT metal artifact reduction

    NASA Astrophysics Data System (ADS)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been used successfully in clinical diagnosis. However, when the body of the patient being examined contains metal objects, the reconstructed image is degraded by severe metal artifacts, which can compromise the diagnosis. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with a statistical iterative reconstruction (SIR) method to reduce these artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. In addition, the DRWTV suppresses artifacts and noise, and the SIR convergence is accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset: a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm performs best in reducing metal artifacts and protecting tissue details.

  18. Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module

    DOE PAGES

    Khodak, A.; Zhai, Y.; Wang, W.; ...

    2017-06-19

    As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and the plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts, and simultaneously, fluid dynamics analysis was performed only in the liquid part. ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between DSM and DFW, and also direct assessment of themore » coolant flow distribution between the parts of DSM and DFW to ensure DSM design meets the DFW cooling requirements. Design of the DSM included voids filled with Boron Carbide pellets, allowing weight reduction while keeping shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties using analytical relation for thermal conductivity. Results of the analysis lead to design modifications improving heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance as well as effect of Boron Carbide will be presented.« less

  19. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    NASA Technical Reports Server (NTRS)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and the performance of VCR on a three-parameter model problem is illustrated. VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and, as a consequence, is implicitly vectorizable.
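
    For reference, a minimal sketch of conventional point cyclic reduction for a tridiagonal system, the algorithm that VCR adapts to vector hardware. This is a plain NumPy illustration, not the VPS 32 implementation, and it requires a system size of 2^k - 1:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] by point cyclic
    reduction; requires len(b) == 2**k - 1. All eliminations within one
    level are independent, which is what a vector machine exploits."""
    a = np.asarray(a, float).copy(); a[0] = 0.0    # ghost coefficients
    c = np.asarray(c, float).copy(); c[-1] = 0.0
    b = np.asarray(b, float); d = np.asarray(d, float)
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    # pad with zero rows so neighbour access never falls off the ends
    a_, b_, c_, d_ = (np.concatenate(([0.0], v, [0.0])) for v in (a, b, c, d))
    e = np.arange(2, n + 1, 2)                     # even rows (1-based) survive
    alpha = -a_[e] / b_[e - 1]
    gamma = -c_[e] / b_[e + 1]
    x_even = cyclic_reduction(alpha * a_[e - 1],
                              b_[e] + alpha * c_[e - 1] + gamma * a_[e + 1],
                              gamma * c_[e + 1],
                              d_[e] + alpha * d_[e - 1] + gamma * d_[e + 1])
    x = np.empty(n)
    x[1::2] = x_even                               # even rows solved recursively
    xpad = np.concatenate(([0.0], x, [0.0]))
    o = np.arange(1, n + 1, 2)                     # back-substitute odd rows
    x[0::2] = (d_[o] - a_[o] * xpad[o - 1] - c_[o] * xpad[o + 1]) / b_[o]
    return x

# check against a dense solve on a 15-unknown diagonally dominant system
rng = np.random.default_rng(0)
n = 15
a, c, d = rng.random(n), rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))
```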

  20. Nano-JASMINE: use of AGIS for the next astrometric satellite

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Gouda, N.; Lammers, U.

    The core data reduction for the Nano-JASMINE mission is planned to be done with Gaia's Astrometric Global Iterative Solution (AGIS). The collaboration started in 2007, prompted by a proposal from Uwe Lammers. In addition to the similar design and operating principles of the two missions, this reuse is possible thanks to the encapsulation of all Gaia-specific aspects of AGIS in a parameter database. Nano-JASMINE will serve as a test bench for the Gaia AGIS software. We present this idea in detail, along with the practical steps necessary to make AGIS work with Nano-JASMINE data. We also summarize the key mission parameters, goals, and status of the data reduction for Nano-JASMINE.

  1. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

    We present the results of applying locally linear embedding (LLE) to reduce the dimensionality of dereddened, continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in the classification of spectra based on the presence or absence of emission lines, and it provides a valuable tool for the classification and analysis of large spectral data sets.
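
    A minimal sketch of the comparison described above, using scikit-learn's implementations of LLE and PCA; the random matrix is only a placeholder for the dereddened, continuum-subtracted spectra (rows = objects, columns = wavelength bins):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.default_rng(1).normal(size=(200, 500))   # placeholder spectra

Y_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
Y_pca = PCA(n_components=2).fit_transform(X)

print(Y_lle.shape, Y_pca.shape)     # both (200, 2): two-dimensional embeddings
```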

  2. General solution of a cosmological model induced from higher dimensions using a kinematical constraint

    NASA Astrophysics Data System (ADS)

    Akarsu, Özgür; Dereli, Tekin; Katırcı, Nihan; Sheftel, Mikhail B.

    2015-05-01

    In a recent study Akarsu and Dereli (Gen. Relativ. Gravit. 45:1211, 2013) discussed the dynamical reduction of a higher dimensional cosmological model which is augmented by a kinematical constraint characterized by a single real parameter, correlating and controlling the expansion of both the external (physical) and internal spaces. In that paper explicit solutions were found only for the case of a three-dimensional internal space. Here we derive a general solution of the system using Lie group symmetry properties, in parametric form, for an arbitrary number of internal dimensions. We also investigate the dynamical reduction of the model as a function of cosmic time for various values of the constraint parameter and generate parametric plots to discuss cosmologically relevant results.

  3. A Reduced Dimension Static, Linearized Kalman Filter and Smoother

    NASA Technical Reports Server (NTRS)

    Fukumori, I.

    1995-01-01

    An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.

  4. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations than existing algorithms such as the locally optimal block preconditioned conjugate gradient (LOBPCG) or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
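
    The paper's algorithm itself is not in standard libraries, but the problem setup, computing a large block of smallest eigenpairs of a sparse Hermitian matrix, can be illustrated with LOBPCG (one of the baselines mentioned above) via scipy; the diagonal matrix is only a stand-in:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, k = 2000, 50                                    # matrix size, block size
A = sp.diags(np.arange(1, n + 1, dtype=float))     # stand-in Hermitian matrix
X0 = np.random.default_rng(2).normal(size=(n, k))  # random initial block

eigvals, eigvecs = lobpcg(A, X0, largest=False, tol=1e-8, maxiter=500)
print(np.sort(eigvals)[:5])     # smallest eigenvalues, approaching 1..5
```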

  5. A PROOF OF CONVERGENCE OF THE HORN AND SCHUNCK OPTICAL FLOW ALGORITHM IN ARBITRARY DIMENSION

    PubMed Central

    LE TARNEC, LOUIS; DESTREMPES, FRANÇOIS; CLOUTIER, GUY; GARCIA, DAMIEN

    2013-01-01

    The Horn and Schunck (HS) method, which amounts to the Jacobi iterative scheme in the interior of the image, was one of the first optical flow algorithms. In this article, we prove the convergence of the HS method whenever the problem is well-posed. Our result is shown in the framework of a generalization of the HS method in dimension n ≥ 1, with a broad definition of the discrete Laplacian. In this context, the condition for convergence is that the intensity gradients are not all contained in the same hyperplane. Two other articles ([17] and [13]) claimed to solve this problem in the case n = 2, but it appears that both of these proofs are erroneous. Moreover, we explain why some standard results on the convergence of the Jacobi method do not apply to the HS problem, unless n = 1. It is also shown that the convergence of the HS scheme implies the convergence of the Gauss-Seidel and SOR schemes for the HS problem. PMID:26097625
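
    For concreteness, a sketch of the HS iteration whose convergence is analyzed above, in dimension n = 2: a Jacobi-type update driven by neighbour averages of the flow field. This is the textbook formulation, not the paper's generalized scheme, and np.roll imposes periodic borders, which real implementations treat more carefully:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=500):
    """Estimate optical flow (u, v) between frames I1 and I2: Jacobi-style
    neighbour averaging plus a pointwise correction from the linearized
    brightness-constancy constraint Ix*u + Iy*v + It = 0."""
    Ix, Iy = np.gradient(I1)                 # spatial gradients (axis 0, axis 1)
    It = I2 - I1                             # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        t = (Ix * u_avg + Iy * v_avg + It) / denom
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

yy, xx = np.mgrid[0:64, 0:64]
I1 = np.sin(xx / 6.0) * np.cos(yy / 9.0)     # smooth synthetic frame
I2 = np.roll(I1, 1, axis=1)                  # translate one pixel along axis 1
u, v = horn_schunck(I1, I2)
print(round(float(v.mean()), 2))             # roughly 1.0, the imposed shift
```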

  6. Brain gray matter phenotypes across the psychosis dimension

    PubMed Central

    Ivleva, Elena I.; Bidesi, Anup S.; Thomas, Binu P.; Meda, Shashwath A.; Francis, Alan; Moates, Amanda F.; Witte, Bradley; Keshavan, Matcheri S.; Tamminga, Carol A.

    2013-01-01

    This study sought to examine whole brain and regional gray matter (GM) phenotypes across the schizophrenia (SZ)–bipolar disorder psychosis dimension using voxel-based morphometry (VBM 8.0 with DARTEL segmentation/normalization) and semi-automated regional parcellation, FreeSurfer (FS 4.3.1/64 bit). 3T T1 MPRAGE images were acquired from 19 volunteers with schizophrenia (SZ), 16 with schizoaffective disorder (SAD), 17 with psychotic bipolar I disorder (BD-P) and 10 healthy controls (HC). Contrasted with HC, SZ showed extensive cortical GM reductions, most pronounced in fronto-temporal regions; SAD had GM reductions overlapping with SZ, albeit less extensive; and BD-P demonstrated no GM differences from HC. Within the psychosis dimension, BD-P showed larger volumes in fronto-temporal and other cortical/subcortical regions compared with SZ, whereas SAD showed intermediate GM volumes. The two volumetric methodologies, VBM and FS, revealed highly overlapping results for cortical GM, but partially divergent results for subcortical volumes (basal ganglia, amygdala). Overall, these findings suggest that individuals across the psychosis dimension show both overlapping and unique GM phenotypes: decreased GM, predominantly in fronto-temporal regions, is characteristic of SZ but not of psychotic BD-P, whereas SAD display GM deficits overlapping with SZ, albeit less extensive. PMID:23177922

  7. Brain gray matter phenotypes across the psychosis dimension.

    PubMed

    Ivleva, Elena I; Bidesi, Anup S; Thomas, Binu P; Meda, Shashwath A; Francis, Alan; Moates, Amanda F; Witte, Bradley; Keshavan, Matcheri S; Tamminga, Carol A

    2012-10-30

    This study sought to examine whole brain and regional gray matter (GM) phenotypes across the schizophrenia (SZ)-bipolar disorder psychosis dimension using voxel-based morphometry (VBM 8.0 with DARTEL segmentation/normalization) and semi-automated regional parcellation, FreeSurfer (FS 4.3.1/64 bit). 3T T1 MPRAGE images were acquired from 19 volunteers with schizophrenia (SZ), 16 with schizoaffective disorder (SAD), 17 with psychotic bipolar I disorder (BD-P) and 10 healthy controls (HC). Contrasted with HC, SZ showed extensive cortical GM reductions, most pronounced in fronto-temporal regions; SAD had GM reductions overlapping with SZ, albeit less extensive; and BD-P demonstrated no GM differences from HC. Within the psychosis dimension, BD-P showed larger volumes in fronto-temporal and other cortical/subcortical regions compared with SZ, whereas SAD showed intermediate GM volumes. The two volumetric methodologies, VBM and FS, revealed highly overlapping results for cortical GM, but partially divergent results for subcortical volumes (basal ganglia, amygdala). Overall, these findings suggest that individuals across the psychosis dimension show both overlapping and unique GM phenotypes: decreased GM, predominantly in fronto-temporal regions, is characteristic of SZ but not of psychotic BD-P, whereas SAD display GM deficits overlapping with SZ, albeit less extensive. Published by Elsevier Ireland Ltd.

  8. Continuous spin representations from group contraction

    NASA Astrophysics Data System (ADS)

    Khan, Abu M.; Ramond, Pierre

    2005-05-01

    We consider how the continuous spin representation (CSR) of the Poincaré group in four dimensions can be generated by dimensional reduction. The analysis uses the front-form little group in five dimensions, which must yield the Euclidean group E(2), the little group of the CSR. We consider two cases, one is the single spin massless representation of the Poincaré group in five dimensions, the other is the infinite component Majorana equation, which describes an infinite tower of massive states in five dimensions. In the first case, the double singular limit j, R →∞, with j /R fixed, where R is the Kaluza-Klein radius of the fifth dimension, and j is the spin of the particle in five dimensions, yields the CSR in four dimensions. It amounts to the Inönü-Wigner contraction, with the inverse Kaluza-Klein radius as contraction parameter. In the second case, the CSR appears only by taking a triple singular limit, where an internal coordinate of the Majorana theory goes to infinity, while leaving its ratio to the Kaluza-Klein radius fixed.

  9. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm, as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
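
    A minimal sketch of the false-nearest-neighbours idea used above to estimate response dimension (the standard Kennel-style test with an illustrative threshold, not the paper's modified version): embed a scalar response with increasing dimension and count neighbours that separate strongly when one more coordinate is added.

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, dim, tau, rtol=10.0):
    """Fraction of nearest neighbours in a dim-dimensional delay embedding
    that separate strongly when the (dim+1)-th coordinate is added."""
    n = len(x) - dim * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    extra = x[dim * tau:dim * tau + n]          # the would-be next coordinate
    dist, idx = cKDTree(emb).query(emb, k=2)    # k=2: first hit is the point itself
    d, j = dist[:, 1], idx[:, 1]
    growth = np.abs(extra - extra[j])
    return np.mean(growth / np.maximum(d, 1e-12) > rtol)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 400.0, 4000)) + 0.01 * rng.normal(size=4000)
for m in (1, 2, 3):
    print(m, fnn_fraction(x, m, tau=16))        # fraction drops once m suffices
```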

  10. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care

    PubMed Central

    Valentijn, Pim P.; Schepman, Sanneke M.; Opheij, Wilfrid; Bruijnzeels, Marc A.

    2013-01-01

    Introduction Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. Methods The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. Results The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. Discussion The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective. PMID:23687482

  11. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care.

    PubMed

    Valentijn, Pim P; Schepman, Sanneke M; Opheij, Wilfrid; Bruijnzeels, Marc A

    2013-01-01

    Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective.

  12. A Relaxation Method for Nonlocal and Non-Hermitian Operators

    NASA Astrophysics Data System (ADS)

    Lagaris, I. E.; Papageorgiou, D. G.; Braun, M.; Sofianos, S. A.

    1996-06-01

    We present a grid method to solve the time dependent Schrödinger equation (TDSE). It uses the Crank-Nicolson scheme to propagate the wavefunction forward in time and finite differences to approximate the derivative operators. The resulting sparse linear system is solved by the symmetric successive overrelaxation iterative technique. The method handles local and nonlocal interactions and Hamiltonians that correspond either to Hermitian or to non-Hermitian matrices with real eigenvalues. We test the method by solving the TDSE in the imaginary time domain, thus converting the time propagation to asymptotic relaxation. Benchmark problems are solved in both one and two dimensions, with local, nonlocal, Hermitian and non-Hermitian Hamiltonians.
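
    A minimal sketch of the imaginary-time relaxation idea for a 1D Hermitian case: Crank-Nicolson stepping drives an arbitrary start toward the ground state. A direct banded solve replaces the paper's SSOR iteration for brevity, and the harmonic oscillator serves as the benchmark (exact ground-state energy 0.5 in these units):

```python
import numpy as np
from scipy.linalg import solve_banded

n, dtau = 400, 0.01
x = np.linspace(-5.0, 5.0, n)
dx = x[1] - x[0]

# Tridiagonal Hamiltonian H = -0.5 d^2/dx^2 + V, harmonic V (hbar = m = 1)
main = 1.0 / dx**2 + 0.5 * x**2
off = np.full(n - 1, -0.5 / dx**2)

def H_apply(p):
    out = main * p                  # tridiagonal matvec H p
    out[1:] += off * p[:-1]
    out[:-1] += off * p[1:]
    return out

# Banded form of (I + dtau/2 H) for the implicit half of Crank-Nicolson
ab = np.zeros((3, n))
ab[0, 1:] = 0.5 * dtau * off        # super-diagonal
ab[1, :] = 1.0 + 0.5 * dtau * main
ab[2, :-1] = 0.5 * dtau * off       # sub-diagonal

psi = np.exp(-(x - 1.0) ** 2)       # arbitrary start, off-centre on purpose
for _ in range(2000):
    rhs = psi - 0.5 * dtau * H_apply(psi)   # explicit half-step
    psi = solve_banded((1, 1), ab, rhs)     # implicit half-step
    psi /= np.sqrt(np.sum(psi**2) * dx)     # renormalise: relaxation decays the norm

print(np.sum(psi * H_apply(psi)) * dx)      # -> 0.5, the exact ground energy
```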

  13. Approximation of the Newton Step by a Defect Correction Process

    NASA Technical Reports Server (NTRS)

    Arian, E.; Batterman, A.; Sachs, E. W.

    1999-01-01

    In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator, motivated by local mode analysis, is introduced into the system. The operator can also be used for preconditioning in the Generalized Minimum Residual (GMRES) method. We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are performed on the small-disturbance shape optimization problem in two dimensions, for both the defect correction process and GMRES.

  14. NeatMap--non-clustering heat map alternatives in R.

    PubMed

    Rajaram, Satwik; Oono, Yoshi

    2010-01-22

    The clustered heat map is the most popular means of visualizing genomic data. It compactly displays a large amount of data in an intuitive format that facilitates the detection of hidden structures and relations in the data. However, it is hampered by its use of cluster analysis which does not always respect the intrinsic relations in the data, often requiring non-standardized reordering of rows/columns to be performed post-clustering. This sometimes leads to uninformative and/or misleading conclusions. Often it is more informative to use dimension-reduction algorithms (such as Principal Component Analysis and Multi-Dimensional Scaling) which respect the topology inherent in the data. Yet, despite their proven utility in the analysis of biological data, they are not as widely used. This is at least partially due to the lack of user-friendly visualization methods with the visceral impact of the heat map. NeatMap is an R package designed to meet this need. NeatMap offers a variety of novel plots (in 2 and 3 dimensions) to be used in conjunction with these dimension-reduction techniques. Like the heat map, but unlike traditional displays of such results, it allows the entire dataset to be displayed while visualizing relations between elements. It also allows superimposition of cluster analysis results for mutual validation. NeatMap is shown to be more informative than the traditional heat map with the help of two well-known microarray datasets. NeatMap thus preserves many of the strengths of the clustered heat map while addressing some of its deficiencies. It is hoped that NeatMap will spur the adoption of non-clustering dimension-reduction algorithms.
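
    NeatMap itself is an R package; the underlying idea, ordering heat-map rows by a dimension-reduction embedding rather than by a cluster dendrogram, can be sketched in Python as follows (stand-in data, 1-D PCA as the ordering):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

X = np.random.default_rng(4).normal(size=(60, 20))       # stand-in data matrix
order = np.argsort(PCA(n_components=1).fit_transform(X).ravel())

plt.imshow(X[order], aspect="auto", cmap="viridis")
plt.ylabel("rows, ordered by 1-D PCA score (not by clustering)")
plt.xlabel("columns")
plt.savefig("neatmap_like.png")
```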

  15. Applications of a direct/iterative design method to complex transonic configurations

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1992-01-01

    The current study explores the use of an automated direct/iterative design method for the reduction of drag in transport configurations, including configurations with engine nacelles. The method requires the user to choose a proper target-pressure distribution and then develops a corresponding airfoil section. The method can be applied to two-dimensional airfoil sections or to three-dimensional wings. The three cases presented show successful application of the method for reducing drag from various sources. The first two cases demonstrate the use of the method to reduce induced drag by designing to an elliptic span-load distribution, and to reduce wave drag by decreasing the shock strength for a given lift. In the third case, a body-mounted nacelle is added, and the method is successfully used to eliminate the increase in wing drag associated with the nacelle addition by redesigning the wing, in combination with the given underwing nacelle, to the clean-wing target-pressure distributions. These cases illustrate several possible uses of the method for reducing different types of drag. The magnitude of the obtainable drag reduction varies with the constraints of the problem and the configuration to be modified.

  16. Comparison between cylindrical and prismatic lithium-ion cell costs using a process based cost model

    NASA Astrophysics Data System (ADS)

    Ciez, Rebecca E.; Whitacre, J. F.

    2017-02-01

    The relative size and age of the US electric vehicle market mean that a few vehicles are able to drive market-wide trends in the battery chemistries and cell formats on the road today. Three lithium-ion chemistries account for nearly all of the storage capacity, and half of the cells are cylindrical. However, no specific model exists to examine the costs of manufacturing these cylindrical cells. Here we present a process-based cost model tailored to the cylindrical lithium-ion cells currently used in the EV market. We examine the costs for varied cell dimensions, electrode thicknesses, chemistries, and production volumes. Although cost savings are possible from increasing cell dimensions and electrode thicknesses, economies of scale have already been reached, and future cost reductions from increased production volumes are minimal. Prismatic cells, which are able to further capitalize on the cost reduction from larger formats, can offer greater reductions than those possible for cylindrical cells.

  17. The human dimensions of energy use in buildings: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Oca, Simona; Hong, Tianzhen; Langevin, Jared

    The “human dimensions” of energy use in buildings refer to the energy-related behaviors of key stakeholders that affect energy use over the building life cycle. Stakeholders include building designers, operators, managers, engineers, occupants, industry, vendors, and policymakers, who directly or indirectly influence the acts of designing, constructing, living, operating, managing, and regulating the built environments, from individual building up to the urban scale. Among factors driving high-performance buildings, human dimensions play a role that is as significant as that of technological advances. However, this factor is not well understood, and, as a result, human dimensions are often ignored or simplified by stakeholders. This work presents a review of the literature on human dimensions of building energy use to assess the state-of-the-art in this topic area. The paper highlights research needs for fully integrating human dimensions into the building design and operation processes with the goal of reducing energy use in buildings while enhancing occupant comfort and productivity. This research focuses on identifying key needs for each stakeholder involved in a building's life cycle and takes an interdisciplinary focus that spans the fields of architecture and engineering design, sociology, data science, energy policy, codes, and standards to provide targeted insights. Greater understanding of the human dimensions of energy use has several potential benefits including reductions in operating cost for building owners; enhanced comfort conditions and productivity for building occupants; more effective building energy management and automation systems for building operators and energy managers; and the integration of more accurate control logic into the next generation of human-in-the-loop technologies. The review concludes by summarizing recommendations for policy makers and industry stakeholders for developing codes, standards, and technologies that can leverage the human dimensions of energy use to reliably predict and achieve energy use reductions in the residential and commercial buildings sectors.

  18. The human dimensions of energy use in buildings: A review

    DOE PAGES

    D'Oca, Simona; Hong, Tianzhen; Langevin, Jared

    2017-08-19

    The “human dimensions” of energy use in buildings refer to the energy-related behaviors of key stakeholders that affect energy use over the building life cycle. Stakeholders include building designers, operators, managers, engineers, occupants, industry, vendors, and policymakers, who directly or indirectly influence the acts of designing, constructing, living, operating, managing, and regulating the built environments, from individual building up to the urban scale. Among factors driving high-performance buildings, human dimensions play a role that is as significant as that of technological advances. However, this factor is not well understood, and, as a result, human dimensions are often ignored or simplified by stakeholders. This work presents a review of the literature on human dimensions of building energy use to assess the state-of-the-art in this topic area. The paper highlights research needs for fully integrating human dimensions into the building design and operation processes with the goal of reducing energy use in buildings while enhancing occupant comfort and productivity. This research focuses on identifying key needs for each stakeholder involved in a building's life cycle and takes an interdisciplinary focus that spans the fields of architecture and engineering design, sociology, data science, energy policy, codes, and standards to provide targeted insights. Greater understanding of the human dimensions of energy use has several potential benefits including reductions in operating cost for building owners; enhanced comfort conditions and productivity for building occupants; more effective building energy management and automation systems for building operators and energy managers; and the integration of more accurate control logic into the next generation of human-in-the-loop technologies. The review concludes by summarizing recommendations for policy makers and industry stakeholders for developing codes, standards, and technologies that can leverage the human dimensions of energy use to reliably predict and achieve energy use reductions in the residential and commercial buildings sectors.

  19. Post-extraction mesio-distal gap reduction assessment by confocal laser scanning microscopy - a clinical 3-month follow-up study.

    PubMed

    García-Herraiz, Ariadna; Silvestre, Francisco Javier; Leiva-García, Rafael; Crespo-Abril, Fortunato; García-Antón, José

    2017-05-01

    The aim of this 3-month follow-up study was to quantify the reduction in the mesio-distal gap dimension (MDGD) that occurs after tooth extraction, through image analysis of three-dimensional images obtained with the confocal laser scanning microscopy (CLSM) technique. Impressions were obtained from 79 patients 1 month after tooth extraction and from 72 patients 3 months after extraction. Cast models were processed by CLSM, and MDGD changes between time points were measured. The mean mesio-distal gap reduction was 343.4 μm 1 month after tooth extraction and 672.3 μm 3 months after extraction. The daily mean gap reduction rate was 10.3 μm/day during the first term (between the baseline and 1-month post-extraction measurements) and 5.4 μm/day during the second term (between 1 and 3 months). The mesio-distal gap reduction is thus largest during the first month following the extraction and continues over time, but to a lesser extent. When inter-dental contacts were absent, the mesio-distal gap reduction was smaller. When a molar tooth was extracted, or when the tooth distal to the edentulous space did not occlude with an antagonist, the mesio-distal gap reduction was larger. Consideration of mesio-distal gap dimension changes can help improve dental treatment planning. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflected spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Conventional classification methods therefore require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful here, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Compared with the most widespread dimension reduction technique, principal component analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as the maximum likelihood method.
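
    A minimal sketch of the reduction step described above, using the PyWavelets package: a multilevel wavelet decomposition of a spectrum, keeping only the coarse approximation coefficients. The spectrum, wavelet choice, and decomposition level are illustrative:

```python
import numpy as np
import pywt

wavelength = np.linspace(0.0, 1.0, 1024)
spectrum = (np.exp(-((wavelength - 0.3) / 0.01) ** 2)        # narrow feature
            + 0.5 * np.exp(-((wavelength - 0.7) / 0.05) ** 2))

level = 3                                     # each level roughly halves the length
coeffs = pywt.wavedec(spectrum, "db4", level=level)
approx = coeffs[0]                            # low-frequency approximation kept

print(len(spectrum), "->", len(approx))       # 1024 -> ~134, about 8x reduction
```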

  1. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different in nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
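
    QUADRO's quadratic, robust, high-dimensional machinery is well beyond a short example, but the underlying linear Rayleigh quotient problem it generalizes can be sketched directly: maximizing w'Aw / w'Bw reduces to a generalized eigenproblem (stand-in matrices below):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
M = rng.normal(size=(10, 10)); A = M @ M.T               # numerator (PSD)
N = rng.normal(size=(10, 10)); B = N @ N.T + 10 * np.eye(10)  # denominator (PD)

vals, vecs = eigh(A, B)          # generalized eigenproblem A v = lambda B v
w = vecs[:, -1]                  # maximizer of the Rayleigh quotient
print(w @ A @ w / (w @ B @ w), vals[-1])   # equal up to rounding
```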

  2. A dimension reduction method for flood compensation operation of multi-reservoir system

    NASA Astrophysics Data System (ADS)

    Jia, B.; Wu, S.; Fan, Z.

    2017-12-01

    Cooperative compensation operations of multiple reservoirs coping with uncontrolled floods play a vital role in real-time flood mitigation. This paper proposes a reservoir flood compensation operation index (ResFCOI), formed from elements of flood control storage, flood inflow volume, flood transmission time and the cooperative operations period. It then establishes a flood cooperative compensation operations model of a multi-reservoir system, in which the ResFCOI determines a computational order for the reservoirs and a differential evolution algorithm computes each single-reservoir flood compensation optimization in turn, so that a dimension reduction method is formed to reduce computational complexity. The Shiguan River Basin, with two large reservoirs and an extensive uncontrolled flood area, is used as a case study. Results show that (a) the reservoirs' flood discharges and the uncontrolled flood are superimposed at Jiangjiaji Station such that the resulting flood peak flow is as small as possible; (b) cooperative compensation operations slightly increase the usage of flood storage capacity in the reservoirs, compared to rule-based operations; and (c) computing a cooperative compensation operations scheme takes 50 seconds on average. The dimension reduction method for guiding flood compensation operations of a multi-reservoir system allows each reservoir to adjust its flood discharge strategy dynamically according to the magnitude and pattern of the uncontrolled flood, so as to mitigate the downstream flood disaster.
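
    The inner step of the scheme, optimizing one reservoir's releases with differential evolution, can be sketched with scipy; the objective, storage bookkeeping, and numbers below are illustrative placeholders, not the paper's ResFCOI formulation:

```python
import numpy as np
from scipy.optimize import differential_evolution

inflow = np.array([500., 900., 1400., 1100., 700., 400.])       # m^3/s per period
uncontrolled = np.array([300., 450., 600., 500., 350., 250.])   # m^3/s per period
storage_max = 5.0e7            # available flood-control volume, m^3
dt = 6 * 3600.0                # period length: 6 h

def peak_flow(release):
    # minimize the peak of (reservoir release + uncontrolled flood),
    # penalizing storage trajectories that overfill or go negative
    stored = np.cumsum((inflow - release) * dt)   # starts from empty storage
    if stored.max() > storage_max or stored.min() < 0.0:
        return 1e12                               # infeasible trajectory
    return np.max(release + uncontrolled)

bounds = [(0.0, 1500.0)] * len(inflow)            # release limits per period
result = differential_evolution(peak_flow, bounds, seed=0)
print(result.x.round(1), result.fun)
```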

  3. SU-G-IeP2-12: The Effect of Iterative Reconstruction and CT Tube Voltage On Hounsfield Unit Values of Iodinated Contrast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; Greene-Donnelly, K; Vallabhaneni, D

    Purpose: To investigate the effects of changing iterative reconstruction strength and tube voltage on Hounsfield unit (HU) values of varying concentrations of iodinated contrast medium in a phantom. Method: Iodinated contrast (Omnipaque 300, GE Healthcare, Princeton NJ) was diluted with distilled water to concentrations of 0.6, 0.9, 1.8, 3.6, 7.2, and 10.8 mg/mL of iodine. The solutions were scanned in a patient-equivalent water phantom on two MDCT scanners: a VCT 64-slice scanner (GE Medical Systems, Waukesha, WI) and an Aquilion One 320-slice scanner (Toshiba America Medical Systems, Tustin CA). The phantom was scanned at 80, 100, 120, and 140 kV using 400, 255, 180, and 130 mAs, respectively, on the VCT scanner, and at 80, 100, 120, and 135 kV using 400, 250, 200, and 150 mAs, respectively, on the Aquilion One. Images were reconstructed at 2.5 mm (VCT) and 0.5 mm (Aquilion One). The VCT images were reconstructed using Adaptive Statistical Iterative Reconstruction (ASIR) at 6 strengths: 0%, 20%, 40%, 60%, 80%, and 100%. Aquilion One images were reconstructed using Adaptive Iterative Dose Reduction (AIDR) at 4 strengths: no AIDR, weak AIDR, standard AIDR, and strong AIDR. Regions of interest (ROIs) were drawn on the images to measure the HU values and standard deviations of the diluted contrast. Second-order polynomials were used to fit the HU values as a function of iodine concentration. Results: For both scanners, there was no significant effect of changing the iterative reconstruction strength. The polynomial fits yielded goodness-of-fit (R²) values averaging 0.997. Conclusion: Changing the strength of the iterative reconstruction has no significant effect on the HU values of iodinated contrast in a tissue-equivalent phantom. Fitted values of HU versus iodine concentration are useful in quantitative imaging protocols such as the determination of cardiac output from time-density curves in the main pulmonary artery.
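
    The fitting step described in the conclusion is a plain second-order polynomial regression; a minimal sketch with illustrative HU values (not the study's measurements):

```python
import numpy as np

conc = np.array([0.6, 0.9, 1.8, 3.6, 7.2, 10.8])        # mg/mL iodine
hu = np.array([18.0, 27.0, 52.0, 101.0, 196.0, 287.0])  # illustrative ROI means

coeffs = np.polyfit(conc, hu, deg=2)       # second-order polynomial fit
fit = np.polyval(coeffs, conc)
ss_res = np.sum((hu - fit) ** 2)
ss_tot = np.sum((hu - hu.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)        # goodness of fit, as reported above
```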

  4. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    NASA Astrophysics Data System (ADS)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), based on non-negative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is non-negativity: an object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
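
    As a minimal sketch of the ANLS framework the abstract builds on (plain two-block coordinate descent, without the paper's nonlinear constraints), each block below is solved as a nonnegative least-squares problem; the data and rank are toy assumptions.

    import numpy as np
    from scipy.optimize import nnls

    def nmf_anls(X, r, n_iter=100, seed=0):
        m, n = X.shape
        rng = np.random.default_rng(seed)
        W, H = rng.random((m, r)), rng.random((r, n))
        for _ in range(n_iter):
            # Fix W, solve min_{H >= 0} ||X - W H||_F column by column.
            for j in range(n):
                H[:, j], _ = nnls(W, X[:, j])
            # Fix H, solve min_{W >= 0} ||X - W H||_F row by row.
            for i in range(m):
                W[i, :], _ = nnls(H.T, X[i, :])
        return W, H

    X = np.abs(np.random.default_rng(1).random((20, 15)))  # nonnegative data
    W, H = nmf_anls(X, r=3)
    print("reconstruction error:", np.linalg.norm(X - W @ H))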

  5. Radiation dose and image quality in pediatric chest CT: effects of iterative reconstruction in normal weight and overweight children.

    PubMed

    Yoon, Haesung; Kim, Myung-Joon; Yoon, Choon-Sik; Choi, Jiin; Shin, Hyun Joo; Kim, Hyun Gi; Lee, Mi-Jung

    2015-03-01

    New CT reconstruction techniques may help reduce the burden of ionizing radiation. Our aim was to quantify the radiation dose reduction achieved when performing pediatric chest CT using a low-dose protocol with 50% adaptive statistical iterative reconstruction (ASIR), compared with age- and gender-matched chest CT using a conventional-dose protocol reconstructed with filtered back projection (control group), and to determine its effect on image quality in normal weight and overweight children. We retrospectively reviewed 40 pediatric chest CT examinations (M:F = 21:19; age range: 0.1-17 years) in each group. Radiation dose was compared between the two groups using a paired Student's t-test. Image quality, including noise, sharpness, artifacts, and diagnostic acceptability, was subjectively assessed by three pediatric radiologists using a four-point scale (superior, average, suboptimal, unacceptable). Eight children in the ASIR group and seven in the control group were overweight. All radiation dose parameters were significantly lower in the ASIR group (P < 0.01), with a greater than 57% dose reduction in overweight children. Image noise was higher in the ASIR group in both normal weight and overweight children. Only one scan in the ASIR group (1/40, 2.5%) was rated diagnostically suboptimal, and no study was rated unacceptable. In both normal weight and overweight children, the ASIR technique is associated with a greater than 57% mean dose reduction without significantly impacting diagnostic image quality in pediatric chest CT examinations. However, CT scans in overweight children may have a greater noise level, even when using the ASIR technique.

  6. A graphical approach to optimizing variable-kernel smoothing parameters for improved deformable registration of CT and cone beam CT images

    NASA Astrophysics Data System (ADS)

    Hart, Vern; Burrow, Damon; Li, X. Allen

    2017-08-01

    A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with kernel reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number, and a least-squares regression line was calculated, implying an average improvement of 0.0236% per iteration. This baseline was used to determine whether a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum, with mean values of 98.26%, 97.58%, and 96.73%, respectively, corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases deviating by more than 0.6% from the regression line.
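
    The contour scoring described above uses the Dice similarity coefficient; a minimal sketch of its computation on binary masks follows, with toy masks in place of actual organ contours.

    import numpy as np

    def dice(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Two overlapping square "organ" masks on a 100 x 100 grid (illustrative).
    a = np.zeros((100, 100)); a[20:60, 20:60] = 1
    b = np.zeros((100, 100)); b[25:65, 25:65] = 1
    print(f"Dice = {dice(a, b):.4f}")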

  7. Characterization of a CT unit for the detection of low contrast structures

    NASA Astrophysics Data System (ADS)

    Viry, Anais; Racine, Damien; Ba, Alexandre; Becce, Fabio; Bochud, François O.; Verdun, Francis R.

    2017-03-01

    Major technological advances in CT enable the acquisition of high-quality images while minimizing patient exposure. The goal of this study was to objectively compare two generations of iterative reconstruction (IR) algorithms for the detection of low-contrast structures. An abdominal phantom (QRM, Germany), containing 8-, 6- and 5-mm-diameter spheres (with a nominal contrast of 20 HU), was scanned using our standard clinical noise index settings on a GE Discovery 750 HD CT. Two additional rings (2.5 and 5 cm) were also added to the phantom. Images were reconstructed using FBP, ASIR-50%, and VEO (full statistical model-based iterative reconstruction, MBIR). The reconstructed slice thickness was 2.5 mm, except 0.625 mm for VEO reconstructions. The noise power spectrum (NPS) was calculated to highlight the potential noise reduction of each IR algorithm. To assess low-contrast detectability (LCD), a channelized Hotelling observer (CHO) with 10 DDoG channels was used, with the area under the curve (AUC) as the figure of merit. Sphere contrast was also measured. ASIR-50% allowed a noise reduction by a factor of two compared with FBP, without improving the LCD. VEO allowed an additional noise reduction at a thinner slice thickness than ASIR-50%, together with a major improvement of the LCD, especially for the large-sized phantom and small lesions. Contrast decreased by up to 10% as phantom size increased for FBP and ASIR-50%, but remained constant with VEO. VEO is particularly interesting for LCD when dealing with large patients and small lesion sizes, and when the detection task is difficult.
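
    A minimal sketch of a channelized Hotelling observer of the kind used above follows: difference-of-Gaussians channels, a Hotelling template from pooled channel statistics, and AUC as the figure of merit. Image size, noise, signal, and channel parameters are illustrative assumptions, not the study's configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    yy, xx = np.mgrid[:N, :N]
    r2 = (xx - N / 2) ** 2 + (yy - N / 2) ** 2

    def dog_channel(s1, s2):
        # Difference-of-Gaussians radial channel, normalized to unit norm.
        c = np.exp(-r2 / (2 * s1**2)) - np.exp(-r2 / (2 * s2**2))
        return (c / np.linalg.norm(c)).ravel()

    channels = np.stack([dog_channel(s, 2 * s) for s in (1.5, 3.0, 6.0, 12.0)])
    signal = 0.8 * np.exp(-r2 / (2 * 3.0**2))  # low-contrast sphere stand-in

    def channel_outputs(with_signal, n=200):
        imgs = rng.normal(0.0, 1.0, (n, N, N)) + (signal if with_signal else 0.0)
        return imgs.reshape(n, -1) @ channels.T  # n x 4 channel responses

    v1, v0 = channel_outputs(True), channel_outputs(False)
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))          # pooled channel covariance
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template
    t1, t0 = v1 @ w, v0 @ w
    auc = (t1[:, None] > t0[None, :]).mean()         # Mann-Whitney AUC estimate
    print(f"AUC = {auc:.3f}")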

  8. An Iterated Global Mascon Solution with Focus on Land Ice Mass Evolution

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Sabaka, T.; Rowlands, D. D.; Lemoine, F. G.; Loomis, B. D.; Boy, J. P.

    2012-01-01

    Land ice mass evolution is determined from a new GRACE global mascon solution. The solution is estimated directly from the reduction of the inter-satellite K-band range rate observations taking into account the full noise covariance, and formally iterating the solution. The new solution increases signal recovery while reducing the GRACE KBRR observation residuals. The mascons are estimated with 10-day and 1-arc-degree equal area sampling, applying anisotropic constraints for enhanced temporal and spatial resolution of the recovered land ice signal. The details of the solution are presented including error and resolution analysis. An Ensemble Empirical Mode Decomposition (EEMD) adaptive filter is applied to the mascon solution time series to compute timing of balance seasons and annual mass balances. The details and causes of the spatial and temporal variability of the land ice regions studied are discussed.

  9. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  10. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view-shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. The method is first compared with conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate its excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5-10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
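
    The core idea, replacing the thresholding stage with a projection onto known nonzero locations, can be sketched as below. This toy version uses a plain projected Landweber iteration rather than the full AMP recursion (no Onsager correction term), with random data standing in for MRI measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 10
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
    support = rng.choice(n, k, replace=False)  # known nonzero locations
    x_true = np.zeros(n)
    x_true[support] = rng.normal(size=k)
    y = A @ x_true

    x = np.zeros(n)
    keep = np.zeros(n, dtype=bool)
    keep[support] = True  # e.g. support estimated from view-shared images
    for _ in range(200):
        x = x + A.T @ (y - A @ x)  # gradient step on 0.5*||y - Ax||^2
        x[~keep] = 0.0             # location constraint replaces thresholding
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))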

  11. High density operation for reactor-relevant power exhaust

    NASA Astrophysics Data System (ADS)

    Wischmeier, M.; ASDEX Upgrade Team; Jet Efda Contributors

    2015-08-01

    With increasing tokamak size and the associated fusion power gain, an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for safe and economic operation. Using purely geometric arguments in an ITER-like divertor, this power flux can be reduced by approximately a factor of 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to removing the power deposited on plasma-facing components, a further reduction of the power flux density by up to a factor of 50 via volumetric processes in the plasma is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed, and an operational scenario is presented as a potential solution for ITER-like divertors under high-density, highly radiating, reactor-relevant conditions. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.

  12. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification by oxidation of pyrite has been observed. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m/yr, which agreed with the calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
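
    A minimal sketch of the split transport/chemistry stepping behind such models: advect nitrate, then react it against a finite pyrite pool in each cell, producing a moving redox front. Stoichiometry, rates, and grid values are illustrative, and the chemistry step is applied once per time step rather than iterated to convergence as in the sequential iterative approach.

    import numpy as np

    nx, dx = 200, 0.1           # 20 m column in 0.1 m cells
    v, dt = 0.5, 0.1            # pore velocity and step obeying CFL: v*dt/dx <= 1
    no3 = np.zeros(nx)          # aqueous nitrate
    pyrite = np.full(nx, 10.0)  # solid-phase pyrite pool per cell
    stoich = 1.0                # nitrate consumed per unit pyrite oxidized (toy)

    for step in range(2000):
        # Transport: explicit upwind advection with a fixed inflow concentration.
        no3[1:] -= v * dt / dx * (no3[1:] - no3[:-1])
        no3[0] = 1.0
        # Chemistry: instantaneous (equilibrium-style) reaction in each cell.
        reacted = np.minimum(no3, pyrite / stoich)
        no3 -= reacted
        pyrite -= stoich * reacted

    front = np.argmax(pyrite > 0)  # first cell with pyrite left = redox front
    print(f"redox front at x = {front * dx:.1f} m after {step + 1} steps")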

  13. Universal and integrable nonlinear evolution systems of equations in 2+1 dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maccari, A.

    1997-08-01

    Integrable systems of nonlinear partial differential equations (PDEs) are obtained from integrable equations in 2+1 dimensions by means of a reduction method of broad applicability based on Fourier expansion and spatio-temporal rescalings, which is asymptotically exact in the limit of weak nonlinearity. The integrability by the spectral transform is explicitly demonstrated, because the corresponding Lax pairs have been derived by applying the same reduction method to the Lax pair of the initial equation. These systems of nonlinear PDEs are likely to be of applicative relevance and have a "universal" character, inasmuch as they may be derived from a very large class of nonlinear evolution equations with a linear dispersive part. © 1997 American Institute of Physics.

  14. Workplace interventions to reduce HIV and TB stigma among health care workers - Where do we go from here?

    PubMed

    Siegel, Jacob; Yassi, Annalee; Rau, Asta; Buxton, Jane A; Wouters, Edwin; Engelbrecht, Michelle C; Uebel, Kerry E; Nophale, Letshego E

    2015-01-01

    Fear of stigma and discrimination among health care workers (HCWs) in South African hospitals is thought to be a major factor in the high rates of HIV and tuberculosis infection experienced in the health care workforce. The aim of the current study is to inform the development of a stigma reduction intervention in the context of a large multicomponent trial. We analysed relevant results of four feasibility studies conducted in the lead up to the trial. Our findings suggest that a stigma reduction campaign must address community and structural level drivers of stigma, in addition to individual level concerns, through a participatory and iterative approach. Importantly, stigma reduction must not only be embedded in the institutional management of HCWs but also be attentive to the localised needs of HCWs themselves.

  15. Lateral conduction effects on heat-transfer data obtained with the phase-change paint technique

    NASA Technical Reports Server (NTRS)

    Maise, G.; Rossi, M. J.

    1974-01-01

    A computerized tool, CAPE (Conduction Analysis Program using Eigenvalues), has been developed to account for lateral heat conduction in wind tunnel models in the data reduction of the phase-change paint technique. The tool also accounts for the effects of finite thickness (thin wings) and surface curvature. A special reduction procedure using just one time of melt is also possible on leading edges. A novel iterative numerical scheme was used, with discretized spatial coordinates but analytic integration in time, to solve the inverse conduction problem involved in the data reduction. A yes-no chart is provided that tells the test engineer when the various corrections are large enough that CAPE should be used. The accuracy of the phase-change paint technique in the presence of finite thickness and lateral conduction is also investigated.
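
    The "discretize in space, integrate analytically in time" idea can be sketched for the forward 1-D conduction problem as follows: the semi-discrete system du/dt = A u is solved exactly through the eigenvalues of the spatial operator, with no time-stepping error. Geometry, material properties, and boundary handling here are toy assumptions, not CAPE's formulation.

    import numpy as np

    n, L, alpha = 50, 0.01, 1e-7  # nodes, slab thickness (m), diffusivity (m^2/s)
    dx = L / (n - 1)
    A = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
         + np.diag(np.ones(n - 1), 1)) * alpha / dx**2
    A[0, :] = 0.0   # crude fixed-temperature row at the heated face
    A[-1, :] = 0.0  # and at the back face (toy boundary treatment)

    u0 = np.zeros(n)
    u0[0] = 100.0                 # step change at the heated surface
    lam, V = np.linalg.eig(A)     # eigen-decomposition of the spatial operator
    c = np.linalg.solve(V, u0)

    def u_at(t):
        # Exact solution of the semi-discrete system at any time t.
        return (V @ (c * np.exp(lam * t))).real

    print("mid-plane temperature at t = 100 s:", u_at(100.0)[n // 2])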

  16. Catalan speakers' perception of word stress in unaccented contexts.

    PubMed

    Ortega-Llebaria, Marta; del Mar Vanrell, Maria; Prieto, Pilar

    2010-01-01

    In unaccented contexts, formant frequency differences related to vowel reduction constitute a consistent cue to word stress in English, whereas in languages such as Spanish that have no systematic vowel reduction, stress perception is based on duration and intensity cues. This article examines the perception of word stress by speakers of Central Catalan, in which, due to its vowel reduction patterns, words either alternate stressed open vowels with unstressed mid-central vowels as in English or contain no vowel quality cues to stress, as in Spanish. Results show that Catalan listeners perceive stress based mainly on duration cues in both word types. Other cues pattern together with duration to make stress perception more robust. However, no single cue is absolutely necessary and trading effects compensate for a lack of differentiation in one dimension by changes in another dimension. In particular, speakers identify longer mid-central vowels as more stressed than shorter open vowels. These results and those obtained in other stress-accent languages provide cumulative evidence that word stress is perceived independently of pitch accents by relying on a set of cues with trading effects so that no single cue, including formant frequency differences related to vowel reduction, is absolutely necessary for stress perception.

  17. Spatial and contrast resolution of ultralow dose dentomaxillofacial CT imaging using iterative reconstruction technology

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2017-01-01

    Objectives: The objective of this study was to determine how iterative reconstruction technology (IRT) influences contrast and spatial resolution in ultralow-dose dentomaxillofacial CT imaging. Methods: A polymethyl methacrylate phantom with various inserts was scanned using a reference protocol (RP) at a CT dose index volume of 36.56 mGy, a sinus protocol at 18.28 mGy, and ultralow-dose protocols (LD) at 4.17 mGy, 2.36 mGy, 0.99 mGy, and 0.53 mGy. All data sets were reconstructed using filtered back projection (FBP) and the following IRTs: adaptive statistical iterative reconstruction (ASIR) at two strengths (ASIR-50, ASIR-100) and model-based iterative reconstruction (MBIR). Inserts containing line-pair patterns and contrast-detail patterns for three different materials were scored by three observers. Observer agreement was analyzed using Cohen's kappa, and differences in performance between the protocols and reconstructions were analyzed with Dunn's test at α = 0.05. Results: Interobserver agreement was acceptable, with a mean kappa value of 0.59. Compared with the RP using FBP, similar scores were achieved at 2.36 mGy using MBIR. MBIR reconstructions showed the highest noise suppression as well as good contrast, even at the lowest doses. Overall, ASIR reconstructions did not outperform FBP. Conclusions: LD protocols with MBIR at a dose reduction of >90% may show no significant differences in spatial and contrast resolution compared with an RP and FBP. Ultralow-dose CT and IRT should be further explored in clinical studies. PMID:28059562
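
    A brief sketch of Cohen's kappa as used above for inter-observer agreement; the two observers' four-point scores below are illustrative placeholders.

    import numpy as np

    def cohen_kappa(a, b):
        a, b = np.asarray(a), np.asarray(b)
        po = (a == b).mean()  # observed agreement
        pe = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
        return (po - pe) / (1.0 - pe)  # chance-corrected agreement

    obs1 = [3, 2, 2, 1, 4, 3, 2, 1, 3, 2]  # scores on a four-point scale
    obs2 = [3, 2, 1, 1, 4, 3, 2, 2, 3, 2]
    print(f"kappa = {cohen_kappa(obs1, obs2):.2f}")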

  18. Results of an Integrative Analysis: A Call for Contextualizing HIV and AIDS Clinical Practice Guidelines to Support Evidence-Based Practice.

    PubMed

    Edwards, Nancy; Kahwa, Eulalia; Hoogeveen, Katie

    2017-12-01

    Practice guidelines aim to improve the standard of care for people living with HIV/AIDS. Successfully implementing guidelines requires tailoring them to the populations served and to social and organizational influences on care. Our objectives were to examine the dimensions of context that nurses and midwives described as having a significant impact on their care of patients living with HIV/AIDS in Kenya, Uganda, South Africa, and Jamaica, and to determine whether HIV/AIDS guidelines include adaptations congruent with these dimensions of context. Two sets of data were used. The first came from a qualitative study: in-depth interviews were conducted with purposively selected nurses, midwives, and nurse managers from 21 districts in the four study countries; a coding framework was iteratively developed and themes inductively identified; and context dimensions were derived from these themes. A second data set of published guidelines for HIV/AIDS care was then assembled. Guidelines were identified through Google and PubMed searches. Using a deductive integrative analysis approach, text related to context dimensions was extracted from the guidelines and categorized into problem and strategy statements. Ninety-six individuals participated in qualitative interviews. Four discrete dimensions of context were identified: health workforce adequacy, workplace exposure risk, workplace consequences for nurses living with HIV/AIDS, and the intersection of work and family life. Guidelines most often acknowledged health human resource constraints and presented mitigation strategies to offset them, and least often discussed workplace consequences and the intersection of family and work life. Guidelines should more consistently acknowledge diverse implementation contexts, propose how recommendations can be adapted to these realities, and suggest what role frontline healthcare providers have in realizing the structural changes necessary for healthier work environments and better patient care. Guideline recommendations should include more explicit advice on adapting their recommendations to different care conditions. © 2017 The Authors. Worldviews on Evidence-Based Nursing published by Wiley Periodicals, Inc. on behalf of Sigma Theta Tau International The Honor Society of Nursing.

  19. Magnet Design Considerations for Fusion Nuclear Science Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y.; Kessel, C.; El-Guebaly, L.

    2016-06-01

    The Fusion Nuclear Science Facility (FNSF) is a nuclear confinement facility that provides a fusion environment with components of the reactor integrated together to bridge the technical gaps in burning plasma and nuclear science between the International Thermonuclear Experimental Reactor (ITER) and the demonstration power plant (DEMO). Compared with ITER, the FNSF is smaller in size but generates a much higher magnetic field, a 30 times higher neutron fluence, and three orders of magnitude longer plasma operation at higher operating temperatures for the structures surrounding the plasma. Input parameters to the magnet design from system code analysis include a magnetic field of 7.5 T at the plasma center with a plasma major radius of 4.8 m and a minor radius of 1.2 m, and a peak field of 15.5 T on the toroidal field (TF) coils. Both low-temperature superconductors (LTS) and high-temperature superconductors (HTS) are considered for the FNSF magnet design, based on the state of the art in fusion magnet technology. The higher magnetic field can be achieved by using high-performance ternary restacked-rod process Nb3Sn strands for the TF magnets. A circular cable-in-conduit conductor (CICC) design similar to the ITER magnets and a high-aspect-ratio rectangular CICC design are evaluated for the FNSF magnets, but low-activation jacket materials may need to be selected. The conductor design concept and the TF coil winding pack composition and dimensions based on horizontal maintenance schemes are discussed. Neutron radiation limits for the LTS and HTS superconductors and electrical insulation materials are also reviewed based on previously tested materials. The material radiation limits for FNSF magnets are defined as part of the conceptual design studies for the FNSF magnets.

  20. A user-centered, iterative engineering approach for advanced biomass cookstove design and development

    NASA Astrophysics Data System (ADS)

    Shan, Ming; Carter, Ellison; Baumgartner, Jill; Deng, Mengsi; Clark, Sierra; Schauer, James J.; Ezzati, Majid; Li, Jiarong; Fu, Yu; Yang, Xudong

    2017-09-01

    Unclean combustion of solid fuel for cooking and other household energy needs leads to severe household air pollution and adverse health impacts in adults and children. Replacing traditional solid fuel stoves with high-efficiency, low-polluting semi-gasifier stoves can potentially contribute to addressing this global problem. The success of semi-gasifier cookstove implementation initiatives depends not only on the technical performance and safety of the stove, but also on the compatibility of the stove design with local cooking practices, the needs and preferences of stove users, and community economic structures. Many past stove design initiatives have failed to address one or more of these dimensions during the design process, resulting in stoves failing to achieve long-term, exclusive use and market penetration. This study presents a user-centered, iterative engineering design approach to developing a semi-gasifier biomass cookstove for rural Chinese homes. Our approach places equal emphasis on stove performance and on meeting the preferences of the individuals most likely to adopt the clean stove technology. Five stove prototypes were iteratively developed following energy market and policy evaluation, laboratory and field evaluations of stove performance and user experience, and direct interactions with stove users. The most recent stove prototype achieved high performance in the field on thermal efficiency (ISO Tier 3) and pollutant emissions (ISO Tier 4), and was received favorably by rural households in the Sichuan province of Southwest China. Among household cooks receiving the final prototype of the intervention stove, 88% reported lighting and using it at least once. At five months post-intervention, the semi-gasifier stoves were used at least once on an average of 68% [95% CI: 43, 93] of days. Our proposed design strategy can be applied to other stove development initiatives in China and other countries.
