Sample records for parametric iterative soft

  1. Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1998-01-01

    Soft-input soft-output building blocks (modules) are presented to construct and iteratively decode in a distributed fashion code networks, a new concept that includes, and generalizes, various forms of concatenated coding schemes.

  2. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for images is developed to recover edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term to address blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method achieves satisfactory restored color images under different blurring conditions.
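
    The alternating-minimization loop can be sketched in one dimension with plain Tikhonov regularization; this is a generic illustration, not the authors' reinforcement scheme with spectral-based operators, and all sizes and weights below are invented:

```python
import numpy as np

n = 128
# Ground truth: a piecewise-constant "image" and a narrow Gaussian blur.
x_true = np.zeros(n); x_true[40:60] = 1.0; x_true[80:90] = 0.5
h_true = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
h_true = np.roll(h_true / h_true.sum(), -n // 2)       # centred blur kernel
y = np.real(np.fft.ifft(np.fft.fft(h_true) * np.fft.fft(x_true)))

lam = 1e-3
Y = np.fft.fft(y)
# Initial blur guess: deliberately too wide.
H = np.fft.fft(np.roll(np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2), -n // 2))
X = np.zeros_like(Y)
costs = []
for _ in range(50):
    # Image step: exact per-frequency minimizer of sum |Y - H X|^2 + lam |X|^2.
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    # Blur step: the same quadratic update with the roles of X and H swapped.
    H = np.conj(X) * Y / (np.abs(X) ** 2 + lam)
    costs.append(float(np.sum(np.abs(Y - H * X) ** 2)
                       + lam * np.sum(np.abs(X) ** 2 + np.abs(H) ** 2)))
x_hat = np.real(np.fft.ifft(X))
```

    Because each step exactly minimizes the joint cost over one block of variables, the cost is non-increasing from iteration to iteration, which is the basic appeal of alternating minimization.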

  3. Cardiac-gated parametric images from 82 Rb PET from dynamic frames and direct 4D reconstruction.

    PubMed

    Germino, Mary; Carson, Richard E

    2018-02-01

    Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82 Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82 Rb dataset, including a perfusion defect, as well as a human 82 Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. 
The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.

  4. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
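
    A caricature of the nested decoupling, using a toy Gaussian (least-squares) model instead of Poisson EM: each outer iteration performs one expensive image-domain update, then many cheap sub-iterations of the linear parametric fit. All matrices and sizes below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_vox, p = 60, 40, 3
A = rng.normal(size=(m, n_vox))    # toy "projection" operator (the expensive part)
B = rng.normal(size=(n_vox, p))    # linear kinetic basis: image = B @ theta
theta_true = np.array([1.0, 0.5, 2.0])
y = A @ B @ theta_true             # noiseless toy data

L_A = np.linalg.norm(A, 2) ** 2    # spectral norms give safe gradient step sizes
L_B = np.linalg.norm(B, 2) ** 2
theta = np.zeros(p)
losses = []
for _ in range(300):
    x = B @ theta
    # One costly tomographic update (a single gradient step on ||y - A x||^2).
    x = x - A.T @ (A @ x - y) / L_A
    # Many cheap parametric sub-iterations fitting theta to the new image.
    for _ in range(10):
        theta = theta - B.T @ (B @ theta - x) / L_B
    losses.append(float(np.linalg.norm(y - A @ B @ theta)))
```

    Running extra sub-iterations of the inner (cheap) fit is what buys the acceleration: the data-fit loss shrinks per outer iteration without extra applications of the expensive operator A.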

  5. An Efficient Non-iterative Bulk Parametrization of Surface Fluxes for Stable Atmospheric Conditions Over Polar Sea-Ice

    NASA Astrophysics Data System (ADS)

    Gryanik, Vladimir M.; Lüpkes, Christof

    2018-02-01

    In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
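
    The iterative step that the paper's semi-analytic approach avoids can be sketched as follows. Note this uses the textbook log-linear stability functions for the stable side (phi = 1 + beta*zeta with beta = 5), not the SHEBA-based functions the authors employ, and the heights and roughness lengths are illustrative:

```python
import math

kappa = 0.4            # von Karman constant
beta = 5.0             # slope of the linear stability functions (stable side)
z, z0, zT = 10.0, 1e-3, 1e-4   # measurement height and roughness lengths [m]

def rib_of_zeta(zeta):
    """Bulk Richardson number implied by the stability parameter zeta = z/L
    for log-linear profiles (psi = -beta*zeta on the stable side)."""
    fm = math.log(z / z0) + beta * zeta
    fh = math.log(z / zT) + beta * zeta
    return zeta * fh / fm ** 2

def solve_zeta(rib, hi=10.0):
    """Invert rib_of_zeta by bisection: the iteration the paper eliminates."""
    lo_, hi_ = 0.0, hi
    for _ in range(60):
        mid = 0.5 * (lo_ + hi_)
        if rib_of_zeta(mid) < rib:
            lo_ = mid
        else:
            hi_ = mid
    return 0.5 * (lo_ + hi_)

def transfer_coefficients(rib):
    zeta = solve_zeta(rib)
    fm = math.log(z / z0) + beta * zeta
    fh = math.log(z / zT) + beta * zeta
    cd = kappa ** 2 / fm ** 2      # momentum transfer coefficient
    ch = kappa ** 2 / (fm * fh)    # heat transfer coefficient
    return cd, ch

cd0, ch0 = transfer_coefficients(0.0)   # neutral limit
cd1, ch1 = transfer_coefficients(0.1)   # stably stratified case
```

    In the neutral limit the coefficients reduce to the familiar kappa^2/ln^2 forms, and they decrease monotonically with increasing Rib, which is the behaviour a non-iterative fit in Rib must reproduce.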

  6. Development of suspended core soft glass fibers for far-detuned parametric conversion

    NASA Astrophysics Data System (ADS)

    Rampur, Anupamaa; Ciąćka, Piotr; Cimek, Jarosław; Kasztelanic, Rafał; Buczyński, Ryszard; Klimczak, Mariusz

    2018-04-01

    Light sources utilizing χ(2) parametric conversion combine high brightness with attractive operation wavelengths in the near and mid-infrared. In optical fibers, it is possible to use χ(3) degenerate four-wave mixing in order to obtain signal-to-idler frequency detuning of over 100 THz. We report on a test series of nonlinear soft glass suspended core fibers intended for parametric conversion of 1000-1100 nm signal wavelengths available from an array of mature lasers into the near-to-mid-infrared range of 2700-3500 nm under pumping with an erbium sub-picosecond laser system. The presented discussion includes modelling of the fiber properties, details of their physical development and characterization, and experimental tests of parametric conversion.
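
    The quoted detuning can be checked with the energy-conservation relation for degenerate four-wave mixing, 2*nu_pump = nu_signal + nu_idler. The pump wavelength below (1560 nm, typical of erbium systems) is an assumption, since the abstract does not state it:

```python
# Degenerate four-wave mixing: 2*nu_pump = nu_signal + nu_idler.
c = 299_792_458e9                  # speed of light in nm/s
lam_p, lam_s = 1560.0, 1050.0      # assumed pump and signal wavelengths [nm]
nu_i = 2 * c / lam_p - c / lam_s   # idler frequency from energy conservation
lam_i = c / nu_i                   # idler wavelength
detuning_THz = (c / lam_s - nu_i) / 1e12   # signal-to-idler detuning
```

    For these values the idler lands near 3033 nm, inside the quoted 2700-3500 nm window, with a signal-to-idler detuning of roughly 187 THz, consistent with the "over 100 THz" claim.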

  7. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
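
    The PCG machinery itself can be sketched with a generic diagonally preconditioned conjugate-gradient solver for a symmetric positive-definite system. The authors' preconditioner (parameter-to-sensitivity ratios) is specific to their cost function; the Jacobi-style diagonal below is only a stand-in, and the test matrix is invented:

```python
import numpy as np

def pcg(A, b, M_inv_diag, iters=100, tol=1e-10):
    """Conjugate gradient with a diagonal preconditioner M^{-1} (applied as an
    elementwise scaling), the same algorithmic skeleton the abstract refers to."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(2)
G = rng.normal(size=(50, 50))
A = G @ G.T + 50 * np.eye(50)          # symmetric positive-definite test matrix
b = rng.normal(size=50)
x = pcg(A, b, 1.0 / np.diag(A))        # Jacobi preconditioner: inverse diagonal
```

    A good preconditioner rescales the problem so the residual drops to a usable level in far fewer iterations, which is the 10-versus-500 effect reported above.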

  8. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%-29% and 32%-70% for 50 × 10(6) and 10 × 10(6) detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.

  9. Multiscale Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.

    2015-01-01

    Purpose To reduce acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix for a total acquisition time of 10.2s, representing a 3-fold reduction in acquisition time. Conclusions The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
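
    The template-matching step at the heart of MRF can be sketched with a toy dictionary of exponential signal evolutions; the signal model, parameter grids, and noise level below are invented for illustration and are far simpler than real MRF sequences:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dictionary: one signal evolution per (T1, T2) pair on a coarse grid.
t = np.linspace(0.01, 3.0, 300)                     # 300 time points [s]
T1s, T2s = np.linspace(0.5, 2.0, 16), np.linspace(0.05, 0.3, 16)
params = [(T1, T2) for T1 in T1s for T2 in T2s]
D = np.array([np.exp(-t / T2) * (1 - np.exp(-t / T1)) for T1, T2 in params])
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

def match(signal):
    """MRF-style template matching: return the (T1, T2) of the dictionary atom
    with the largest normalized inner product with the measured evolution."""
    s = signal / np.linalg.norm(signal)
    return params[int(np.argmax(D_norm @ s))]

# A pixel generated from a known (T1, T2) pair, with and without noise:
T1_true, T2_true = T1s[5], T2s[9]
sig = np.exp(-t / T2_true) * (1 - np.exp(-t / T1_true))
sig_noisy = sig + 0.0003 * rng.normal(size=t.size)
T1_hat, T2_hat = match(sig_noisy)
```

    The iterative-denoising reconstruction above exploits exactly this step: enforcing pixel-wise fidelity to the best-matching atom projects each intermediate image onto the manifold of physically plausible evolutions.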

  10. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  11. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application.

    PubMed

    Mazon, D; Liegeard, C; Jardin, A; Barnsley, R; Walsh, M; O'Mullane, M; Sirinelli, A; Dorchies, F

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  12. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. The amount of aberration recovered varies as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
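
    The inner-loop idea of alternating between measurement planes while enforcing measured amplitudes can be illustrated with the classical single-plane Gerchberg-Saxton error-reduction loop. This is only a simplified relative of the focus-diverse, adaptive-diversity algorithm described above (no diversity, aberration well under one wave, so no unwrapping arises), and every array and parameter is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64

# Unknown smooth pupil phase (astigmatism-like, well under one wave).
xx = np.linspace(-1, 1, n)
phase_true = 0.8 * np.add.outer(xx**2, -(xx**2))
support = (np.add.outer(xx**2, xx**2) <= 1.0).astype(float)   # circular pupil
field_true = support * np.exp(1j * phase_true)

amp_pupil = np.abs(field_true)                 # "measured" pupil amplitude
amp_focal = np.abs(np.fft.fft2(field_true))    # "measured" focal-plane amplitude

# Iterative transform: alternate between planes, replacing the amplitude with
# the measured one in each plane while keeping the current phase estimate.
field = amp_pupil * np.exp(1j * rng.uniform(-0.1, 0.1, (n, n)))
errs = []
for _ in range(200):
    F = np.fft.fft2(field)
    errs.append(float(np.linalg.norm(np.abs(F) - amp_focal)))
    F = amp_focal * np.exp(1j * np.angle(F))           # focal-plane constraint
    field = np.fft.ifft2(F)
    field = amp_pupil * np.exp(1j * np.angle(field))   # pupil constraint
```

    The error metric of this error-reduction iteration is provably non-increasing, which is why the inner loop can safely be run to a fixed iteration budget before handing off to the outer loop.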

  13. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming, a simple optical layout, and other practical advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective; nevertheless, it was insufficient for some images with very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.

  14. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.

    2016-11-15

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  15. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
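
    The residual-based stopping rule can be illustrated with a minimal SDC-like (Picard-type) sweep for y' = -y on Chebyshev-Lobatto collocation nodes. The node choice, tolerance, and test problem are illustrative, and this sketch omits the lower-order propagator that full SDC applies inside each correction sweep:

```python
import numpy as np

# Collocation nodes on [0, T] and the spectral integration matrix S, where
# (S v)_i approximates the integral of the interpolant of v from 0 to t_i.
T = 0.5
M = 5
nodes = 0.5 * T * (1 - np.cos(np.pi * np.arange(M) / (M - 1)))

S = np.zeros((M, M))
for j in range(M):
    e = np.zeros(M); e[j] = 1.0
    coeffs = np.polyfit(nodes, e, M - 1)       # j-th Lagrange basis polynomial
    anti = np.polyint(coeffs)                  # its antiderivative
    S[:, j] = np.polyval(anti, nodes) - np.polyval(anti, 0.0)

f = lambda y: -y                # test ODE y' = -y with y(0) = 1
y = np.ones(M)                  # initial guess: constant in time
res0 = None
for sweep in range(30):
    picard = 1.0 + S @ f(y)     # correction sweep: y0 + integral of f(y)
    res = np.linalg.norm(picard - y)           # collocation residual
    y = picard
    if res0 is None:
        res0 = res              # residual of the first correction sweep
    elif res < 1e-12 * max(res0, 1.0):
        break                   # resilience-style test: small relative residual
```

    The stopping test compares the current residual to that of the first sweep, which is exactly the kind of relative criterion the paper uses to decide when extra sweeps have absorbed a soft error.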

  16. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  17. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  18. Multiscale reconstruction for MR fingerprinting.

    PubMed

    Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A

    2016-06-01

    To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using the highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.

  19. Drawing dynamical and parameters planes of iterative families and methods.

    PubMed

    Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us the excellent schemes (or dreadful ones).
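
    Drawing a dynamical plane amounts to iterating the scheme from a grid of complex starting points and colouring each point by the root its orbit reaches. The sketch below uses Newton's method on p(z) = z^2 - 1 as a stand-in iteration, since Kim's parametric fourth-order family itself is defined in the paper; grid extents and tolerances are illustrative:

```python
import numpy as np

def newton_basins(n=200, iters=40, tol=1e-8):
    """Dynamical plane for p(z) = z^2 - 1: classify each starting point in a
    complex grid by the root its Newton orbit converges to."""
    re = np.linspace(-2, 2, n)
    im = np.linspace(-2, 2, n)
    z = re[None, :] + 1j * im[:, None]
    for _ in range(iters):
        mask = np.abs(z) > 1e-12                     # avoid dividing at z = 0
        z = np.where(mask, 0.5 * (z + 1.0 / z), z)   # Newton step for z^2 - 1
    basin_plus = np.abs(z - 1) < tol
    basin_minus = np.abs(z + 1) < tol
    return basin_plus, basin_minus

bp, bm = newton_basins()
```

    Plotting `bp`/`bm` as colours over the grid reproduces the familiar basin picture (right half-plane to +1, left to -1, with the imaginary axis as the Julia set); for a parametric family, repeating this over a grid of parameter values yields the parameter plane the abstract describes.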

  20. Least-squares reverse time migration in elastic media

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2017-02-01

    Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.

  1. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoder supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, which precludes soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, where other advanced forward error correction schemes fail, and it is also suitable for 40 Gb/s upgrades over existing 10 Gb/s infrastructure.
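    The forward-backward structure referred to above can be sketched on a toy channel. The following is a minimal illustration, not the paper's 40 Gb/s system: a BCJR-style MAP symbol detector for a hypothetical two-tap ISI channel with binary inputs, where the trellis state is simply the previous symbol.

```python
import numpy as np

def bcjr_map(y, h=(1.0, 0.5), sigma=0.5, p1=0.5):
    """Minimal BCJR (forward-backward) MAP symbol detector for a
    2-tap ISI channel y[k] = h0*x[k] + h1*x[k-1] + noise, x in {0,1}.
    The trellis state is the previous symbol; returns P(x[k]=1 | y)."""
    n = len(y)
    h0, h1 = h

    def gamma(k, s, x):
        # branch metric: channel likelihood times symbol prior
        mean = h0 * x + h1 * s
        like = np.exp(-(y[k] - mean) ** 2 / (2 * sigma ** 2))
        return like * (p1 if x == 1 else 1 - p1)

    # forward recursion (start in state 0, i.e. x[-1] = 0)
    alpha = np.zeros((n + 1, 2))
    alpha[0] = [1.0, 0.0]
    for k in range(n):
        for s in (0, 1):
            for x in (0, 1):
                alpha[k + 1, x] += alpha[k, s] * gamma(k, s, x)
        alpha[k + 1] /= alpha[k + 1].sum()

    # backward recursion
    beta = np.ones((n + 1, 2))
    for k in range(n - 1, -1, -1):
        for s in (0, 1):
            beta[k, s] = sum(gamma(k, s, x) * beta[k + 1, x] for x in (0, 1))
        beta[k] /= beta[k].sum()

    # combine into soft posterior outputs
    post = np.zeros(n)
    for k in range(n):
        num = sum(alpha[k, s] * gamma(k, s, 1) * beta[k + 1, 1] for s in (0, 1))
        den = sum(alpha[k, s] * gamma(k, s, x) * beta[k + 1, x]
                  for s in (0, 1) for x in (0, 1))
        post[k] = num / den
    return post

x = np.array([1, 0, 1, 1, 0])
y = 1.0 * x + 0.5 * np.r_[0, x[:-1]]  # noiseless channel output
p = bcjr_map(y, sigma=0.3)
print((p > 0.5).astype(int))  # hard decisions derived from soft outputs
```

    In an iterative receiver, the soft posteriors `p` (rather than the hard decisions) would be passed to the outer decoder.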

  2. Drawing Dynamical and Parameters Planes of Iterative Families and Methods

    PubMed Central

    Chicharro, Francisco I.

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is carried out on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable and unstable regions where the choice of the parameter yields excellent (or dreadful) schemes. PMID:24376386

  3. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  4. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  5. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, the following statistically significant differences in contrast-to-noise ratios were shown (all, P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100, a small effect on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.

  6. Kernel approach to molecular similarity based on iterative graph similarity.

    PubMed

    Rupp, Matthias; Proschak, Ewgenij; Schneider, Gisbert

    2007-01-01

    Similarity measures for molecules are of basic importance in chemical, biological, and pharmaceutical applications. We introduce a molecular similarity measure defined directly on the annotated molecular graph, based on iterative graph similarity and optimal assignments. We give an iterative algorithm for the computation of the proposed molecular similarity measure, prove its convergence and the uniqueness of the solution, and provide an upper bound on the required number of iterations necessary to achieve a desired precision. Empirical evidence for the positive semidefiniteness of certain parametrizations of our function is presented. We evaluated our molecular similarity measure by using it as a kernel in support vector machine classification and regression applied to several pharmaceutical and toxicological data sets, with encouraging results.
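    As a rough sketch of what an iterative graph similarity computation looks like, the following toy implementation iterates a Blondel-style coupled fixed point between the vertices of two graphs. The kernel in the paper additionally incorporates atom/bond labels and an optimal-assignment step, so this only illustrates the iterative core; the damping step is a standard remedy for the 2-cycles the undamped iteration can fall into.

```python
import numpy as np

def iterative_vertex_similarity(A, B, iters=200, tol=1e-10):
    """Vertex-vertex similarity between graphs with adjacency matrices
    A (n x n) and B (m x m) via the normalized fixed-point iteration
        X <- (A X B^T + A^T X B) / ||.||_F
    with damping (averaging with the previous iterate) to avoid
    oscillation between even and odd iterates."""
    n, m = A.shape[0], B.shape[0]
    X = np.ones((n, m)) / np.sqrt(n * m)
    for _ in range(iters):
        Y = A @ X @ B.T + A.T @ X @ B
        Y = 0.5 * (Y / np.linalg.norm(Y) + X)   # damping step
        Y /= np.linalg.norm(Y)
        if np.linalg.norm(Y - X) < tol:
            return Y
        X = Y
    return X

# toy graphs: path P3 and triangle C3 (undirected adjacency matrices)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
B = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
S = iterative_vertex_similarity(A, B)
print(np.round(S, 3))  # the path's center vertex scores highest
```

    The degree-2 center of the path is more similar to the (all degree-2) triangle vertices than the path's endpoints are, which is what the converged score matrix reflects.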

  7. Design for disassembly and sustainability assessment to support aircraft end-of-life treatment

    NASA Astrophysics Data System (ADS)

    Savaria, Christian

    Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models, and new correlations for high-pressure gas turbine housings and shroud segments, towards a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations, thus reducing set-up and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e. geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment. Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.

  8. Automatic correction of intensity nonuniformity from sparseness of gradient distribution in medical images.

    PubMed

    Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P; Gee, James C

    2009-01-01

    We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities.
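    The iteratively re-weighted least squares (IRLS) pattern the authors mention is a standard computational template. A minimal generic sketch (a robust line fit under an l_p residual penalty, not the paper's bias-field model) looks like this:

```python
import numpy as np

def irls_fit(X, y, p=1.0, iters=50, eps=1e-6):
    """IRLS for min_w sum |y - Xw|^p: each pass solves a weighted
    least-squares problem with weights w_i = |r_i|^(p-2), recomputed
    from the current residuals r."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        r = y - X @ w
        W = np.clip(np.abs(r), eps, None) ** (p - 2)
        XW = X * W[:, None]
        # weighted normal equations: (X^T W X) w = X^T W y
        w = np.linalg.solve(XW.T @ X, XW.T @ y)
    return w

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
X = np.c_[x, np.ones_like(x)]
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(100)
y[::10] += 5.0          # gross outliers that bias plain least squares
slope, intercept = irls_fit(X, y, p=1.0)   # p=1: robust (LAD) fit
print(round(slope, 2), round(intercept, 2))
```

    With p = 1 the reweighting downweights the outliers, so the fit recovers the clean slope and intercept where ordinary least squares would be pulled upward.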

  9. Automatic Correction of Intensity Nonuniformity from Sparseness of Gradient Distribution in Medical Images

    PubMed Central

    Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P.; Gee, James C.

    2013-01-01

    We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities. PMID:20426191

  10. Double soft graviton theorems and Bondi-Metzner-Sachs symmetries

    NASA Astrophysics Data System (ADS)

    Anupam, A. H.; Kundu, Arpan; Ray, Krishnendu

    2018-05-01

    It is now well understood that Ward identities associated with the (extended) BMS algebra are equivalent to single soft graviton theorems. In this work, we show that if we consider nested Ward identities constructed out of two BMS charges, a class of double soft factorization theorems can be recovered. By making connections with earlier works in the literature, we argue that at the subleading order, these double soft graviton theorems are the so-called consecutive double soft graviton theorems. We also show how these nested Ward identities can be understood as Ward identities associated with BMS symmetries in scattering states defined around (non-Fock) vacua parametrized by supertranslations or superrotations.

  11. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The advantages of the proposed method are as follows: first, it avoids the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.

  12. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
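    The parametric (Dinkelbach-type) iteration for fractional programs mentioned in the abstract can be sketched on a simplified energy-efficiency problem. The cognitive-radio constraints (interference thresholds, primary users) are omitted here, and the channel gains are made up for illustration; note how the inner subproblem has exactly the water-filling structure the abstract describes.

```python
import numpy as np

def energy_efficient_power(h, Pc=1.0, tol=1e-9, iters=100):
    """Dinkelbach-style parametric iteration for the fractional program
        max_p  sum_i log(1 + h_i p_i) / (Pc + sum_i p_i),
    where h are channel gains and Pc is a fixed circuit power.
    Each step solves max_p f(p) - lam*g(p), whose solution is the
    water-filling rule p_i = max(0, 1/lam - 1/h_i), then updates
    lam = f(p)/g(p) until the ratio stops changing."""
    lam = 0.1
    for _ in range(iters):
        p = np.maximum(0.0, 1.0 / lam - 1.0 / h)   # water-filling step
        f = np.sum(np.log1p(h * p))                # network throughput
        g = Pc + np.sum(p)                         # total network power
        new_lam = f / g
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return p, lam

h = np.array([2.0, 1.0, 0.25])      # hypothetical channel gains
p, ee = energy_efficient_power(h)
print(np.round(p, 3), round(ee, 3))  # allocation and achieved efficiency
```

    At convergence the weakest channel receives no power (it falls below the water level), and the returned `ee` equals the throughput-to-power ratio of the returned allocation.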

  13. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades image quality, giving these images a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vectors with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987
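    For reference, the exhaustive IFSA baseline that the firefly algorithm is compared against amounts to full-search block matching. A minimal sketch on synthetic data (not ultrasound) is:

```python
import numpy as np

def full_search_motion(ref, cur, block=8, radius=4):
    """Exhaustive block-matching motion estimation: for each block of
    `cur`, scan every displacement within `radius` in `ref` and keep
    the one minimizing the sum of absolute differences (SAD). The
    firefly variant explores only a few candidate displacements per
    block instead of all of them."""
    H, W = cur.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - tgt).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors

rng = np.random.default_rng(1)
ref = rng.random((16, 16))
cur = np.roll(ref, shift=(2, 1), axis=(0, 1))   # known global motion
print(full_search_motion(ref, cur))
```

    For interior blocks the estimated displacement points back to the block's origin in the reference frame, here (-2, -1); blocks touching the wrap-around border will not match exactly.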

  14. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades image quality, giving these images a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vectors with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  15. Role of soft-iron impellers on the mode selection in the von Kármán-sodium dynamo experiment.

    PubMed

    Giesecke, André; Stefani, Frank; Gerbeth, Gunter

    2010-01-29

    A crucial point for the understanding of the von Kármán-sodium (VKS) dynamo experiment is the influence of soft-iron impellers. We present numerical simulations of a VKS-like dynamo with a localized permeability distribution that resembles the shape of the flow driving impellers. It is shown that the presence of soft-iron material essentially determines the dynamo process in the VKS experiment. An axisymmetric magnetic field mode can be explained by the combined action of the soft-iron disk and a rather small alpha effect parametrizing the induction effects of unresolved small scale flow fluctuations.

  16. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
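    The classic checksum encoding for matrix-vector multiplication that this line of work builds on fits in a few lines; the paper's scheme decouples the checksum updates and adds adaptive overhead control on top of an invariant like this one.

```python
import numpy as np

def abft_matvec(A, x, tol=1e-8):
    """Checksum-protected matrix-vector product: carry the column
    checksum row c = 1^T A alongside A; after y = A x, the invariant
    sum(y) == c @ x must hold, so a corrupted entry of y (or of A,
    if c is kept in protected storage) becomes detectable."""
    c = A.sum(axis=0)          # checksum row, maintained with A
    y = A @ x
    ok = abs(y.sum() - c @ x) <= tol * (1 + abs(c @ x))
    return y, ok

A = np.arange(9.0).reshape(3, 3)
x = np.array([1.0, 2.0, 3.0])
y, ok = abft_matvec(A, x)
print(ok)            # clean product satisfies the invariant

y_faulty = y.copy()
y_faulty[1] += 1e-3  # inject a soft error into the result
c = A.sum(axis=0)
print(abs(y_faulty.sum() - c @ x) > 1e-8)  # corruption is detected
```

    Detection alone is not recovery: in an online ABFT design the detected fault triggers a correction step or a rollback to the last checkpoint, as the abstract describes.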

  17. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  18. New Soft Rock Pillar Strength Formula Derived Through Parametric FEA Using a Critical State Plasticity Model

    NASA Astrophysics Data System (ADS)

    Rastiello, Giuseppe; Federico, Francesco; Screpanti, Silvio

    2015-09-01

    Many abandoned room and pillar mines have been excavated not far from the surface of large areas of important European cities. In Rome, these excavations took place at shallow depths (3-15 m below the ground surface) in weak pyroclastic soft rocks. Many of these cavities have collapsed; others appear to be in a stable condition, although an appreciable percentage of their structural components (pillars, roofs, etc.) have shown increasing signs of distress from both the morphological and mechanical points of view. In this study, the stress-strain behaviour of soft rock pillars sustaining systems of cavities under vertical loads was numerically simulated, starting from the in situ initial conditions due to excavation of the cavities. The mechanical behaviour of the constituent material of the pillar was modelled according to the Modified Cam-Clay constitutive law (elasto-plastic with strain hardening). The influence of the pillar geometry (cross-section area, shape, and height) and mechanical parameters of the soft rock on the ultimate compressive strength of the pillar as a whole was parametrically investigated first. Based on the numerical results, an original relationship for pillar strength assessment was developed. Finally, the estimated pillar strengths according to the proposed formula and well-known formulations in the literature were compared.

  19. Stable integrated hyper-parametric oscillator based on coupled optical microcavities.

    PubMed

    Armaroli, Andrea; Feron, Patrice; Dumeige, Yannick

    2015-12-01

    We propose a flexible scheme based on three coupled optical microcavities that permits us to achieve stable oscillations in the microwave range, the frequency of which depends only on the cavity coupling rates. We find that the different dynamical regimes (soft and hard excitation) affect the oscillation intensity, but not their periods. This configuration may permit us to implement compact hyper-parametric sources on an integrated optical circuit with interesting applications in communications, sensing, and metrology.

  20. WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D; Williamson, J; Siebers, J

    2014-06-15

    Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual-energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle and bone, these errors result in range prediction errors of less than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric-fit DECT models, BVM has comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
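    The core of the two-parameter BVM is a small linear solve: represent an unknown material's attenuation at the two scan energies as a weighted sum of two basis materials. A sketch with made-up attenuation values (not ICRU or measured data):

```python
import numpy as np

def bvm_coefficients(mu_low, mu_high, basis):
    """Basis-vector model (BVM): given an unknown material's linear
    attenuation (mu_low, mu_high) at two energies, solve the 2x2
    system  w1*basis1 + w2*basis2 = (mu_low, mu_high)  for the basis
    weights w1, w2. Derived quantities (electron density, Iex) are
    then interpolated with the same weights."""
    M = np.array(basis, float).T   # columns = basis materials
    return np.linalg.solve(M, np.array([mu_low, mu_high]))

# hypothetical attenuation coefficients (cm^-1) at two beam energies
water  = (0.25, 0.18)
poly   = (0.20, 0.17)
tissue = (0.24, 0.178)   # unknown material to decompose
w = bvm_coefficients(*tissue, basis=[water, poly])
print(np.round(w, 3))    # basis weights reproducing the measurement
```

    Because the model is a direct linear solve per voxel, no iterative nonlinear fitting is needed, which is the computational advantage the conclusion points to.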

  1. 2.5 TW, two-cycle IR laser pulses via frequency domain optical parametric amplification.

    PubMed

    Gruson, V; Ernotte, G; Lassonde, P; Laramée, A; Bionta, M R; Chaker, M; Di Mauro, L; Corkum, P B; Ibrahim, H; Schmidt, B E; Legaré, F

    2017-10-30

    Broadband optical parametric amplification in the IR region has reached a new milestone through the use of a non-collinear Frequency domain Optical Parametric Amplification system. We report a laser source delivering 11.6 fs pulses with 30 mJ of energy at a central wavelength of 1.8 μm at 10 Hz repetition rate corresponding to a peak power of 2.5 TW. The peak power scaling is accompanied by a pulse shortening of about 20% upon amplification due to the spectral reshaping with higher gain in the spectral wings. This source paves the way for high flux soft X-ray pulses and IR-driven laser wakefield acceleration.

  2. Application of an iterative least-squares waveform inversion of strong-motion and teleseismic records to the 1978 Tabas, Iran, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Mendoza, C.

    1991-01-01

    An iterative least-squares technique is used to simultaneously invert the strong-motion records and teleseismic P waveforms for the 1978 Tabas, Iran, earthquake to deduce the rupture history. The effects of using different data sets and different parametrizations of the problem (linear versus nonlinear) are considered. A consensus of all the inversion runs indicates a complex, multiple source for the Tabas earthquake, with four main source regions over a fault length of 90 km and an average rupture velocity of 2.5 km/sec. -from Authors

  3. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover fully sampled images from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
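    The ISTA baseline that the proposed p-thresholding algorithm modifies is compact. A generic sparse-recovery sketch (synthetic data, not MRI k-space) is:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=0.05, iters=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1: a gradient
    step on the data term followed by soft thresholding. A
    p-thresholding variant swaps `soft` for a non-convex shrinkage
    rule but keeps the same gradient-then-threshold loop."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -0.8, 0.5]            # sparse ground truth
y = A @ x_true                                     # undersampled measurements
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))         # recovered support
```

    Even with half as many measurements as unknowns, the sparsity-promoting threshold recovers the support of the true signal; the l_1 penalty slightly shrinks the recovered amplitudes, which is one motivation for non-convex p-thresholding rules.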

  4. Multiphysics Engineering Analysis for an Integrated Design of ITER Diagnostic First Wall and Diagnostic Shield Module Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y.; Loesser, G.; Smith, M.

    ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and provide structural support while allowing for diagnostic access to the plasma. The design of the DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated structural responses of the components. A multiphysics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, together with three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.

  5. Unity-Efficiency Parametric Down-Conversion via Amplitude Amplification.

    PubMed

    Niu, Murphy Yuezhen; Sanders, Barry C; Wong, Franco N C; Shapiro, Jeffrey H

    2017-03-24

    We propose an optical scheme, employing optical parametric down-converters interlaced with nonlinear sign gates (NSGs), that completely converts an n-photon Fock-state pump to n signal-idler photon pairs when the down-converters' crystal lengths are chosen appropriately. The proof of this assertion relies on amplitude amplification, analogous to that employed in Grover search, applied to the full quantum dynamics of single-mode parametric down-conversion. When we require that all Grover iterations use the same crystal, and account for potential experimental limitations on crystal-length precision, our optimized conversion efficiencies reach unity for 1≤n≤5, after which they decrease monotonically for n values up to 50, which is the upper limit of our numerical dynamics evaluations. Nevertheless, our conversion efficiencies remain higher than those for a conventional (no NSGs) down-converter.

  6. Further Results of Soft-Inplane Tiltrotor Aeromechanics Investigation Using Two Multibody Analyses

    NASA Technical Reports Server (NTRS)

    Masarati, Pierangelo; Quaranta, Giuseppe; Piatak, David J.; Singleton, Jeffrey D.

    2004-01-01

    This investigation focuses on the development of multibody analytical models to predict the dynamic response, aeroelastic stability, and blade loading of a soft-inplane tiltrotor wind-tunnel model. Comprehensive rotorcraft-based multibody analyses enable modeling of the rotor system to a high level of detail such that complex mechanics and nonlinear effects associated with control system geometry and joint deadband may be considered. The influence of these and other nonlinear effects on the aeromechanical behavior of the tiltrotor model are examined. A parametric study of the design parameters which may have influence on the aeromechanics of the soft-inplane rotor system are also included in this investigation.

  7. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    PubMed

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The way of relaxing the time windows adopted in the orienteering problem with soft time windows (OPSTW) studied in this research is a late-service relaxation that allows linearly penalized late services to customers. We solve this problem heuristically using a hybrid iterated local search. The results of the computational study show that the proposed approach is able to achieve promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated test instances of the OPSTW, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.

  8. Non-parametric identification of multivariable systems: A local rational modeling approach with application to a vibration isolation benchmark

    NASA Astrophysics Data System (ADS)

    Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom

    2018-05-01

Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case, recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated in simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters than alternative approaches.

  9. Charm: Cosmic history agnostic reconstruction method

    NASA Astrophysics Data System (ADS)

    Porqueres, Natalia; Ensslin, Torsten A.

    2017-03-01

    Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.
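The iterative Wiener filter at the core of charm is, in its linear part, the posterior-mean estimate for a Gaussian signal observed through a linear response with Gaussian noise. A minimal numpy sketch of that core (the prior, response, and dimensions are invented for illustration; the actual charm iteration also handles non-linear responses):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 50
x = np.linspace(0, 1, npix)
# smooth Gaussian-process prior covariance (plus jitter for numerical stability)
S = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02) + 1e-6 * np.eye(npix)
R = np.eye(npix)                     # identity response for simplicity
N = 0.1 * np.eye(npix)               # white noise covariance

s_true = rng.multivariate_normal(np.zeros(npix), S)
d = R @ s_true + rng.multivariate_normal(np.zeros(npix), N)

# Wiener filter: m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d
N_inv = np.linalg.inv(N)
D_inv = np.linalg.inv(S) + R.T @ N_inv @ R
m = np.linalg.solve(D_inv, R.T @ N_inv @ d)

improved = np.mean((m - s_true) ** 2) < np.mean((d - s_true) ** 2)
print(improved)                      # the filtered estimate beats the raw data
```

Iterating this filter while updating a linearization point is what lets the scheme handle non-Gaussian or non-linear settings.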

  10. Generation of sub-two-cycle millijoule infrared pulses in an optical parametric chirped-pulse amplifier and their application to soft x-ray absorption spectroscopy with high-flux high harmonics

    NASA Astrophysics Data System (ADS)

    Ishii, Nobuhisa; Kaneshima, Keisuke; Kanai, Teruto; Watanabe, Shuntaro; Itatani, Jiro

    2018-01-01

    An optical parametric chirped-pulse amplifier (OPCPA) based on bismuth triborate (BiB3O6, BIBO) crystals has been developed to deliver 1.5 mJ, 10.1 fs optical pulses around 1.6 μm with a repetition rate of 1 kHz and a stable carrier-envelope phase. The seed and pump pulses of the BIBO-based OPCPA are provided from two Ti:sapphire chirped-pulse amplification (CPA) systems. In both CPA systems, transmission gratings are used in the stretchers and compressors that result in a high throughput and robust operation without causing any thermal problem and optical damage. The seed pulses of the OPCPA are generated by intrapulse frequency mixing of a spectrally broadened continuum, temporally stretched to approximately 5 ps then, and amplified to more than 1.5 mJ. The amplified pulses are compressed in a fused silica block down to 10.1 fs. This BIBO-based OPCPA has been applied to high-flux high harmonic generation beyond the carbon K edge at 284 eV. The high-flux soft-x-ray continuum allows measuring the x-ray absorption near-edge structure of the carbon K edge within 2 min, which is shorter than a typical measurement time using synchrotron-based light sources. This laser-based table-top soft-x-ray source is a promising candidate for ultrafast soft x-ray spectroscopy with femtosecond to attosecond time resolution.

  11. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

167 5-31 Concatenation of a tilted-QAM inner code with an LDPC outer code with a two component iterative soft-decision decoder. . . . . . . . . 168 5...for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to...bits) low-density parity-check (LDPC) code 1. 2. The coded bits are randomly interleaved so that bits nearby go through different sub-channels, and are

  12. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
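The soft-thresholding iteration referred to above has a compact generic form (iterative soft-thresholding, ISTA). The sketch below is a self-contained illustration in which the sparsifying basis is the canonical one rather than a wavelet basis; the matrix, sizes, and parameters are invented for the example:

```python
import numpy as np

def soft_threshold(x, mu):
    """Soft-thresholding: the proximal operator of mu * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def ista(A, y, mu, step, n_iter):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + mu*||x||_1:
    a gradient step on the quadratic term followed by soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * mu)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80)) / np.sqrt(40)   # underdetermined measurement matrix
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]            # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, y, mu=0.05, step=0.1, n_iter=500)
support = np.sort(np.argsort(np.abs(x_hat))[-3:])
print(support)   # the three largest coefficients sit on the true support
```

With a wavelet-sparse image, `A` would be the product of the forward projector and the inverse wavelet transform; the iteration itself is unchanged.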

  13. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

Atmospheric tomography, i.e., the reconstruction of the turbulence in the atmosphere, is a central task for the adaptive optics systems of the next generation of telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex, and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on a wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi-conjugate adaptive optics (MCAO) system simulated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory, we demonstrate the robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
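The payoff of preconditioning described above can be seen in a generic conjugate-gradient sketch (the matrix and the idealized preconditioner below are invented for illustration and have nothing to do with the actual atmospheric-tomography operator):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-6, max_iter=1000):
    """Preconditioned conjugate gradients; returns the solution and the
    number of iterations needed to reach the residual tolerance."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

rng = np.random.default_rng(2)
n = 200
eigvals = np.linspace(1.0, 1e4, n)          # wide spectrum: ill-conditioned system
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigvals) @ Q.T
b = rng.standard_normal(n)

x_plain, iters_plain = pcg(A, b, np.eye(n))
M_inv = Q @ np.diag(1.0 / eigvals) @ Q.T    # idealized (exact) preconditioner
x_pre, iters_pre = pcg(A, b, M_inv)
print(iters_plain, iters_pre)               # far fewer iterations when preconditioned
```

With the exact inverse as preconditioner the iteration count collapses; practical preconditioners, such as a frequency-dependent one, trade part of that gain for a cheap application cost.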

  14. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, a projection-type microscope has advantages in its wide viewing area, easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to the Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially those with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as substitutes for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects for each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise-evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen image.
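The Fresnel / inverse-Fresnel pair at the heart of the correction procedure can be sketched with an FFT-based paraxial propagator, where the inverse transform is propagation over the negative distance. The numbers below (wavelength, pixel size, distance) are merely plausible placeholders, not the microscope's actual parameters:

```python
import numpy as np

def fresnel(u, dx, wavelength, z):
    """Paraxial (Fresnel) propagation of a 1-D field over distance z via the
    angular-spectrum transfer function; z < 0 gives the inverse transform."""
    fx = np.fft.fftfreq(u.size, d=dx)
    H = np.exp(-1j * np.pi * wavelength * z * fx ** 2)
    return np.fft.ifft(np.fft.fft(u) * H)

u0 = np.zeros(256, dtype=complex)
u0[100:156] = 1.0                          # simple 1-D aperture as a test object
wl, dx, z = 2.4e-9, 1e-6, 1e-3             # water-window-like scale (illustrative)
u_blur = fresnel(u0, dx, wl, z)            # diffraction-blurred field
u_rec = fresnel(u_blur, dx, wl, -z)        # inverse Fresnel transform
print(np.max(np.abs(u_rec - u0)) < 1e-10)  # round trip restores the field
```

The actual correction procedure iterates such transforms together with constraints in the image domain; the noiseless round trip above only checks the transform pair itself, which is exactly the step background noise degrades.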

  15. Self Consistent Ambipolar Transport and High Frequency Oscillatory Transient in Graphene Electronics

    DTIC Science & Technology

    2015-08-17

study showed that in the presence of an ac field, THz oscillations exhibit soft resonances at a frequency roughly equal to half of the inverse of the ...exhibit soft resonances at a frequency roughly equal to half of the inverse of the carrier transit time to the LO phonon energy. It also showed that in...carriers in graphene undergo an anomalous parametric resonance. Such resonance occurs at about half the frequency ωF = 2πeF/ℏωOP, where 2π/ωF is the time

  16. Singularity Crossing, Transformation of Matter Properties and the Problem of Parametrization in Field Theories

    NASA Astrophysics Data System (ADS)

    Kamenshchik, A. Yu.

    2018-03-01

    We investigate particular cosmological models, based either on tachyon fields or on perfect fluids, for which soft future singularities arise in a natural way. Our main result is the description of a smooth crossing of the soft singularity in models with an anti-Chaplygin gas or with a particular tachyon field in the presence of dust. Such a crossing is made possible by certain transformations of matter properties. We discuss and compare also different approaches to the problem of crossing of the Big Bang-Big Crunch singularities.

  17. Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.

    PubMed

    Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I

    2002-01-01

    There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.

  18. Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors

    DTIC Science & Technology

    2012-02-29

filter/detector structure. Since the BP detector itself is unaltered from [1], it can accommodate a system employing channel codes such as LDPC encoding...considered in [1], or can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. Since our focus is on the design of...simplex method of [15], since it was already available in Matlab, via the "fminsearch" function. 6 Cost surfaces To visualize the cost surfaces, consider

  19. More than the sum of its parts: Coarse-grained peptide-lipid interactions from a simple cross-parametrization

    NASA Astrophysics Data System (ADS)

    Bereau, Tristan; Wang, Zun-Jing; Deserno, Markus

    2014-03-01

    Interfacial systems are at the core of fascinating phenomena in many disciplines, such as biochemistry, soft-matter physics, and food science. However, the parametrization of accurate, reliable, and consistent coarse-grained (CG) models for systems at interfaces remains a challenging endeavor. In the present work, we explore to what extent two independently developed solvent-free CG models of peptides and lipids—of different mapping schemes, parametrization methods, target functions, and validation criteria—can be combined by only tuning the cross-interactions. Our results show that the cross-parametrization can reproduce a number of structural properties of membrane peptides (for example, tilt and hydrophobic mismatch), in agreement with existing peptide-lipid CG force fields. We find encouraging results for two challenging biophysical problems: (i) membrane pore formation mediated by the cooperative action of several antimicrobial peptides, and (ii) the insertion and folding of the helix-forming peptide WALP23 in the membrane.

  20. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, on the other hand, generate sinogram data that are transformed mathematically into pixel-based images. The dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the resulting object boundaries from the edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, becomes practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved.
Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to the experimental industrial CT system data.

  1. Orthobiologics in the Foot and Ankle.

    PubMed

    Temple, H Thomas; Malinin, Theodore I

    2016-12-01

    Many allogeneic biologic materials, by themselves or in combination with cells or cell products, may be transformative in healing or regeneration of musculoskeletal bone and soft tissues. By reconfiguring the size, shape, and methods of tissue preparation to improve deliverability and storage, unique iterations of traditional tissue scaffolds have emerged. These new iterations, combined with new cell technologies, have shaped an exciting platform of regenerative products that are effective and provide a bridge to newer and better methods of providing care for orthopedic foot and ankle patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Aeroelastic Stability of A Soft-Inplane Gimballed Tiltrotor Model In Hover

    NASA Technical Reports Server (NTRS)

    Nixon, Mark W.; Langston, Chester W.; Singleton, Jeffrey D.; Piatak, David J.; Kvaternik, Raymond G.; Corso, Lawrence M.; Brown, Ross

    2001-01-01

Soft-inplane rotor systems can significantly reduce the inplane rotor loads generated during the maneuvers of large tiltrotors, thereby reducing the strength requirements and the associated structural weight of the hub. Soft-inplane rotor systems, however, are subject to instabilities associated with ground resonance, and for tiltrotors this instability has increased complexity as compared to a conventional helicopter. Researchers at Langley Research Center and Bell Helicopter-Textron, Inc. have completed an initial study of a soft-inplane gimballed tiltrotor model subject to ground resonance conditions in hover. Parametric variations of the rotor collective pitch and blade root damping, and their associated effects on the model stability, were examined. Also considered in the study was the effectiveness of an active swash-plate and a generalized predictive control (GPC) algorithm for stability augmentation of the ground resonance conditions. Results of this study show that the ground resonance behavior of a gimballed soft-inplane tiltrotor can be significantly different from that of a classical soft-inplane helicopter rotor. The GPC-based active swash-plate was successfully implemented, and served to significantly augment damping of the critical modes to an acceptable value.

  3. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is achieving a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
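For a Gray-mapped 16-QAM constellation like the one targeted above, the reference max-log LLR computation can be sketched as follows (the bit-to-level mapping and noise variance are illustrative; a hardware demapper replaces the minimum searches with simplified piecewise-linear expressions per bit):

```python
from itertools import product

# Gray mapping per axis: levels -3, -1, 1, 3 labelled 00, 01, 11, 10
gray2level = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
constellation = {
    bits: complex(gray2level[bits[0], bits[1]], gray2level[bits[2], bits[3]])
    for bits in product((0, 1), repeat=4)
}

def maxlog_llr(r, noise_var):
    """Max-log LLR per bit: (min squared distance over symbols with bit=1
    minus min squared distance over symbols with bit=0) / noise_var."""
    llrs = []
    for i in range(4):
        d0 = min(abs(r - s) ** 2 for b, s in constellation.items() if b[i] == 0)
        d1 = min(abs(r - s) ** 2 for b, s in constellation.items() if b[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

llr = maxlog_llr(-0.9 + 2.8j, noise_var=0.5)
hard = [0 if l > 0 else 1 for l in llr]
print(hard)   # [0, 1, 1, 0]: the bits of the nearest symbol, -1 + 3j
```

With this sign convention a positive LLR means bit 0 is more likely; the magnitude feeds the iterative decoder as soft reliability information.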

  4. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non

  5. Rapid calculation of accurate atomic charges for proteins via the electronegativity equalization method.

    PubMed

    Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav

    2013-10-28

    We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
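The EEM itself reduces to one linear solve per structure: each atom's effective electronegativity (a fitted parameter A_i plus a hardness term B_i q_i plus the Coulomb field of the other charges) is equalized subject to total-charge conservation. A minimal sketch with made-up parameters (the fitted EEM parameter sets from the paper are not reproduced here):

```python
import numpy as np

def eem_charges(A, B, coords, total_charge=0.0, kappa=1.0):
    """Solve the EEM linear system for atomic charges.
    A: electronegativity-like parameters, B: hardness-like parameters
    (both illustrative); coords: atomic positions."""
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if i != j:
                M[i, j] = kappa / np.linalg.norm(coords[i] - coords[j])
        M[i, n] = -1.0          # common equalized electronegativity (unknown)
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # charge conservation: sum(q) = total_charge
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n]              # charges; sol[n] is the molecular electronegativity

# toy diatomic: atom 2 is more electronegative, so it gains negative charge
coords = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
q = eem_charges(A=[2.0, 3.0], B=[7.0, 8.0], coords=coords)
print(q.round(3), q.sum().round(6))  # opposite charges summing to zero
```

Real EEM parameter sets are fitted per element against quantum-chemical reference charges, which is the parametrization effort the paper describes.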

  6. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    NASA Astrophysics Data System (ADS)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.

  7. A physiology-based parametric imaging method for FDG-PET data

    NASA Astrophysics Data System (ADS)

    Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele

    2017-12-01

Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
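The pixel-wise fitting step can be illustrated with a damped (Levenberg-style) Gauss-Newton iteration on a toy one-compartment decay model; the model, parameters, and damping schedule are invented for the example and far simpler than the paper's compartmental systems:

```python
import numpy as np

def fit_decay(t, y, k0, n_iter=100):
    """Damped Gauss-Newton fit of c(t; k1, k2) = k1 * exp(-k2 * t)."""
    k = np.array(k0, dtype=float)
    lam = 1e-2
    resid = lambda k: y - k[0] * np.exp(-k[1] * t)
    cost = np.sum(resid(k) ** 2)
    for _ in range(n_iter):
        r = resid(k)
        # Jacobian of the model with respect to (k1, k2)
        J = np.column_stack([np.exp(-k[1] * t),
                             -k[0] * t * np.exp(-k[1] * t)])
        # damped normal equations keep the step well-posed
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        c_new = np.sum(resid(k + step) ** 2)
        if c_new < cost:                 # accept step, relax damping
            k, cost, lam = k + step, c_new, lam * 0.5
        else:                            # reject step, increase damping
            lam *= 10.0
    return k

t = np.linspace(0.0, 10.0, 60)
y = 2.0 * np.exp(-0.4 * t)               # noiseless data from k1=2.0, k2=0.4
k_hat = fit_decay(t, y, k0=[1.0, 1.0])
print(k_hat.round(4))                    # converges to [2.0, 0.4]
```

The paper's method applies an iteration of this type pixel-wise, with compartmental impulse-response models and regularization in place of the single exponential.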

  8. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  9. Active chainmail fabrics for soft robotic applications

    NASA Astrophysics Data System (ADS)

    Ransley, Mark; Smitham, Peter; Miodownik, Mark

    2017-08-01

    This paper introduces a novel type of smart textile with electronically responsive flexibility. The chainmail inspired fabric is modelled parametrically and simulated via a rigid body physics framework with an embedded model of temperature controlled actuation. Our model assumes that individual fabric linkages are rigid and deform only through their own actuation, thereby decoupling flexibility from stiffness. A physical prototype of the active fabric is constructed and it is shown that flexibility can be significantly controlled through actuator strains of ≤10%. Applications of these materials to soft-robotics such as dynamically reconfigurable orthoses and splints are discussed.

  10. Parametric study of sensor placement for vision-based relative navigation system of multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung

    2017-12-01

In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Various vision-based relative navigation systems are therefore used for proximity operations between two spacecraft. For the implementation of these systems, a sensor placement problem arises on the exterior of the spacecraft due to its limited space. To deal with the sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF). These algorithms are applied to set the locations of the sensors and to estimate relative positions and attitudes for each combination of PSDs and beacons. Scores for the sensor placement are then calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is selected as the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
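Farthest point optimization over candidate sensor sites can be sketched with the classic greedy rule: repeatedly pick the candidate farthest from all sites chosen so far. The planar grid of candidate locations below is invented for illustration; the actual study places sensors on the spacecraft exterior:

```python
import numpy as np

def farthest_point_sample(points, k, start=0):
    """Greedy farthest-point selection: each new site maximizes its distance
    to the nearest already-chosen site (an FPO-style placement sketch)."""
    chosen = [start]
    d = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

# candidate sites on a 1 x 1 panel, 5 x 5 grid
grid = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
sites = farthest_point_sample(grid, k=4)
print(grid[sites])   # the four panel corners
```

Each placement produced this way would then be scored by running the estimation filter and comparing the accuracy of the resulting relative state estimates.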

  11. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
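One classic detection mechanism for silent data corruption in linear-algebra kernels is algorithm-based fault tolerance, which carries a checksum row through the computation; it is in the same spirit as the detection the paper suggests, though not necessarily the scheme it uses:

```python
import numpy as np

def verify(C, check):
    """Column sums of C must match the propagated checksum row."""
    return np.allclose(C.sum(axis=0), check)

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))

Ac = np.vstack([A, A.sum(axis=0)])    # append a checksum row to A
Cc = Ac @ B                           # the checksum propagates through the product
C, check = Cc[:-1], Cc[-1]

print(verify(C, check))               # True: clean result passes the check
C[2, 5] += 1.0                        # simulate a silent single-element corruption
print(verify(C, check))               # False: the checksum exposes it
```

A check of this kind flags a corrupted block so it can be recomputed, which keeps the correction cost moderate relative to the full run.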

  12. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  13. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana

    The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device in which the configuration is stored in Static RAM (SRAM), functional data in dedicated Block RAM (BRAM), and functional state logic in flip-flops. Single Event Upsets (SEUs) caused by the ionizing radiation of neutrons produce soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and, where possible, soft-error repair were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST near Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
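
    The ECC repair principle described above can be illustrated with a toy single-error-correcting code. The sketch below is a generic Hamming(7,4) encoder/corrector, not the authors' FPGA implementation; all names and values are illustrative. A non-zero parity syndrome both detects a bit-flip and points at the upset bit, which is then repaired in place, exactly the detect-and-scrub cycle a BRAM SEU sensor performs.

```python
# Illustrative sketch (not the ITER/Xilinx implementation): Hamming(7,4),
# a single-error-correcting code of the kind used to repair SEU bit-flips.

def hamming74_encode(d):
    """Encode 4 data bits [d0..d3] into a 7-bit codeword with 3 parity bits."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # parity over codeword positions 1,3,5,7
    p1 = d0 ^ d2 ^ d3          # parity over positions 2,3,6,7
    p2 = d1 ^ d2 ^ d3          # parity over positions 4,5,6,7
    return [p0, p1, d0, p2, d1, d2, d3]

def hamming74_correct(c):
    """Recompute parity; a non-zero syndrome is the 1-based error position."""
    c = list(c)
    s0 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s1 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s2 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s0 + 2 * s1 + 4 * s2
    if syndrome:
        c[syndrome - 1] ^= 1   # repair the upset bit in place
    return c, syndrome

word = hamming74_encode([1, 0, 1, 1])
upset = list(word)
upset[4] ^= 1                  # simulate a radiation-induced bit-flip
repaired, syndrome = hamming74_correct(upset)
```

    Real SECDED (single-error-correct, double-error-detect) memories add one more overall parity bit, but the syndrome mechanism is the same.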

  15. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    NASA Astrophysics Data System (ADS)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems face additional technical challenges in 3D soft tissue imaging, including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as the "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it remains important for 3D imaging on a C-arm to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
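
    The MLTR iteration mentioned above has a compact closed-form update (Nuyts-style separable surrogate): the attenuation map is incremented by the likelihood gradient divided by a diagonal curvature term. The sketch below runs that update on a made-up toy system (8 rays, 4 pixels), not real C-arm data; the matrix, counts, and attenuation values are all illustrative.

```python
import numpy as np

# Toy sketch of the MLTR (maximum-likelihood transmission) update:
#   mu_j += sum_i a_ij (yhat_i - y_i) / sum_i a_ij * yhat_i * l_i,
# with l_i the row sum of the system matrix A and yhat = b * exp(-A mu).

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(8, 4))      # 8 rays through 4 pixels (toy)
mu_true = np.array([0.2, 0.5, 0.1, 0.3])    # "true" attenuation map
b = 1.0e4                                   # blank-scan photon count per ray
y = b * np.exp(-A @ mu_true)                # noiseless transmission counts

mu = np.zeros(4)                            # initial estimate
row_sum = A.sum(axis=1)
for _ in range(2000):
    yhat = b * np.exp(-A @ mu)              # expected counts under current mu
    grad = A.T @ (yhat - y)                 # gradient of the log-likelihood
    curv = A.T @ (yhat * row_sum)           # separable curvature surrogate
    mu = np.maximum(mu + grad / curv, 0.0)  # additive update + non-negativity
```

    On truncated data the same iteration is applied, but rays leaving the reconstructed FOV make the problem under-determined, which is where the artifact behavior studied in the paper arises.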

  16. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation.

    PubMed

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D; Oldham, Kenn R

    2014-12-01

    High frequency large scanning angle electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror's nonlinear dynamics under such excitation is analyzed in a Hill's equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles using iterative approximations for nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror's frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies.
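
    The stability analysis described above can be mimicked numerically: integrate a Hill-type oscillator whose stiffness is modulated by a duty-cycled square wave at twice the natural frequency and compare the response with and without excitation. The parameters below are toy values chosen to sit inside the principal resonance tongue, not the micromirror's actual dynamics.

```python
import math

# Minimal sketch: x'' + c x' + (1 + eps*s(t)) x = 0 with a duty-cycled
# square-wave modulation s(t), driven at twice the natural frequency (w0 = 1).

def square_wave(t, duty, period):
    """+1 during the on-fraction `duty` of each drive period, else -1."""
    return 1.0 if (t % period) < duty * period else -1.0

def simulate(duty, eps=0.3, c=0.01, t_end=60.0, dt=1e-3):
    period = math.pi                  # drive frequency 2 = twice w0
    x, v, t = 1e-3, 0.0, 0.0          # small initial deflection
    def acc(t, x, v):
        return -c * v - (1.0 + eps * square_wave(t, duty, period)) * x
    while t < t_end:                  # classical RK4 integration
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
    return abs(x) + abs(v)

grown  = simulate(duty=0.5)   # 50% duty: inside the parametric resonance tongue
steady = simulate(duty=0.0)   # no modulation: the response simply rings down
```

    Sweeping `duty` and the drive frequency in such a simulation traces out the voltage-frequency stability regions that the paper derives analytically from the Hill's equation form.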

  17. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to solve the reconstruction problem from a limited number of projections, which allows a reduction of the radiation exposure of patients during data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR), with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA), to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
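
    The FISTA/soft-threshold combination named above is a standard sparse solver and can be sketched on a toy under-determined system standing in for a few-view CT problem. The matrix, sparsity pattern, and regularization weight below are illustrative, not the paper's setup.

```python
import numpy as np

# Sketch of FISTA with soft-threshold filtering for
#   min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
# on a toy sparse recovery problem (60 "measurements", 100 unknowns).

def soft_threshold(x, t):
    """The STF operator: shrink toward zero by t, zeroing small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))          # under-determined system (few views)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]      # sparse "image"
y = A @ x_true                              # noiseless projections

lam = 0.1
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(100)
z = x.copy()
t = 1.0
for _ in range(1000):
    x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
    t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
    x, t = x_new, t_new
```

    The momentum step is what distinguishes FISTA from plain iterative shrinkage: it improves the worst-case convergence rate from O(1/k) to O(1/k^2).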

  18. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices

    NASA Astrophysics Data System (ADS)

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-09-01

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.

  19. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices

    PubMed Central

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-01-01

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body. PMID:27670953

  20. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices.

    PubMed

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-09-27

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes' (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.

  1. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-01

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
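
    The IBI step referenced above is a one-line update: the pair potential is corrected by kT times the log-ratio of the current and target radial distribution functions. The sketch below demonstrates that update on a toy table; instead of re-running an MD simulation each iteration (as real IBI must), it uses the dilute-gas closure g(r) = exp(-V(r)/kT) as a stand-in, for which the iteration converges to the target Lennard-Jones potential.

```python
import math

# Toy sketch of iterative Boltzmann inversion:
#   V_{i+1}(r) = V_i(r) + kT * ln( g_i(r) / g_target(r) ).
# The "simulation" g_model = exp(-V/kT) is a dilute-limit stand-in, not MD.

kT = 1.0
r = [0.9 + 0.05 * i for i in range(60)]                 # radial grid (toy units)
V_true = [4.0 * ((1.0 / ri) ** 12 - (1.0 / ri) ** 6) for ri in r]  # LJ target
g_target = [math.exp(-v / kT) for v in V_true]          # target RDF

V = [0.0] * len(r)                                      # start from zero potential
for _ in range(10):
    g_model = [math.exp(-v / kT) for v in V]            # stand-in "simulation"
    V = [v + kT * math.log(gm / gt)
         for v, gm, gt in zip(V, g_model, g_target)]
```

    In a real IBI run each iteration re-simulates the coarse-grained system, so convergence takes many expensive cycles and, as the paper notes, the resulting pairwise potential is only valid near the state point it was trained at.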

  2. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization.

    PubMed

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-28

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.

  3. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas, which offer excellent conditions for absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigation.

  4. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceeds 10 standard deviations away from observations and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
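
    The core of the iterative refocussing (history matching) idea above is an implausibility measure: a parameter setting is ruled out when the mismatch between model output and observation, scaled by the combined variances, exceeds a cutoff (commonly 3). The sketch below uses a made-up one-parameter toy function in place of NEMO; the observation and variance values are illustrative.

```python
import math
import random

# Minimal sketch of one refocussing wave: keep only parameter settings whose
# implausibility I(x) = |z - f(x)| / sqrt(var_obs + var_struct) is <= 3.
# Everything retained forms the not-ruled-out-yet (NROY) space.

def f(x):
    """Stand-in toy simulator (NOT an ocean model)."""
    return 2.0 * x + 1.0

z = 4.0                           # the observation to match
var_obs, var_struct = 0.05, 0.05  # observation error + structural discrepancy

random.seed(0)
candidates = [random.uniform(-2.0, 4.0) for _ in range(2000)]
nroy = [x for x in candidates
        if abs(z - f(x)) / math.sqrt(var_obs + var_struct) <= 3.0]
```

    Note the asymmetry that protects against over-tuning: points are ruled *out* only when the model demonstrably cannot match the data; no single "best" point is selected. Successive waves re-sample within the NROY space, often with a cheaper emulator standing in for the full model.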

  5. Characterisation of soft magnetic materials by measurement: Evaluation of uncertainties up to 1.8 T and 9 kHz

    NASA Astrophysics Data System (ADS)

    Elfgen, S.; Franck, D.; Hameyer, K.

    2018-04-01

    Magnetic measurements are indispensable for the characterization of soft magnetic material used e.g. in electrical machines. Characteristic values are used as quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, existing influencing factors in the characterization of soft magnetic materials are named and their resulting uncertainties contributions studied. The analysis of the resulting uncertainty contributions can serve the operator as additional selection criteria for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard, using a Single sheet tester and its impact on the identification of iron loss parameter is studied.

  6. A parametric study of helium retention in beryllium and its effect on deuterium retention

    NASA Astrophysics Data System (ADS)

    Alegre, D.; Baldwin, M. J.; Simmonds, M.; Nishijima, D.; Hollmann, E. M.; Brezinsek, S.; Doerner, R. P.

    2017-12-01

    Beryllium samples have been exposed in the PISCES-B linear plasma device to conditions relevant to the International Thermonuclear Experimental Reactor (ITER) in pure He, D, and D/He mixed plasmas. Except at intermediate sample exposure temperatures (573-673 K), He addition to a D plasma is found to have a beneficial effect, as it reduces D retention in Be (by up to ˜55%), although the mechanism is unclear. Retention of He is typically around 10^20-10^21 He m^-2 and is affected primarily by the Be surface temperature during exposure and by the ion fluence at <500 K exposure, but not by the ion impact energy at 573 K. Contamination of the Be surface with high-Z elements from the mask of the sample holder in pure He plasmas is also observed under certain conditions, and leads to unexpectedly large He retention values as well as changes in the surface morphology. An estimate of the tritium retention in the Be first wall of ITER is provided; it is sufficiently low to allow safe operation of ITER.

  7. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation, and it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences; it regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when the models exhibit no global pose differences or significant bending, for example, families of similar shapes such as human femur and vertebrae models.

  8. Status on Iterative Transform Phase Retrieval Applied to the GBT Data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Scott; Shiri, Ron; Hollis, Jan M.; Lyons, Richard; Prestage, Richard; Hunter, Todd; Ghigo, Frank; Nikolic, Bojan

    2007-01-01

    This slide presentation reviews the use of iterative transform phase retrieval in the analysis of the Green Bank Radio Telescope (GBT) data. It reviews the NASA projects that have used phase retrieval, and the testbed for the algorithm to be used for the James Webb Space Telescope. It compares phase retrieval with an interferometer, and reviews the two approaches used for phase retrieval: iterative transform (ITA) and parametric (non-linear least squares model fitting). The concept of ITA phase retrieval is reviewed, along with its application to radio antennas. The presentation also examines the National Radio Astronomy Observatory (NRAO) data from the GBT and the Fourier model that NRAO uses to analyze the data. The challenge for ITA phase retrieval is reviewed, and the coherent approximation for incoherent data is shown; the validity of the approximation is good for a large tilt. There is a review of the proof-of-concept phase retrieval simulation using the input wavefront, and the initial sampling parameter estimate from the focused GBT data.
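
    The iterative transform approach named above alternates between two planes, imposing the known magnitude in each while keeping the evolving phase estimate (the Gerchberg-Saxton scheme). The sketch below runs a 1-D toy version with synthetic data; it is not the GBT holography pipeline, and the sizes and phase statistics are illustrative.

```python
import numpy as np

# Toy 1-D Gerchberg-Saxton-style iterative transform loop: enforce the known
# aperture magnitude in one plane and the measured far-field magnitude in the
# other, retaining the current phase estimate at each projection.

rng = np.random.default_rng(2)
n = 256
aperture_mag = np.ones(n)                        # known pupil amplitude
true_phase = rng.uniform(-0.5, 0.5, n)           # unknown wavefront error
farfield_mag = np.abs(np.fft.fft(aperture_mag * np.exp(1j * true_phase)))

g = aperture_mag.astype(complex)                 # flat-phase starting guess
start_residual = np.linalg.norm(np.abs(np.fft.fft(g)) - farfield_mag)

for _ in range(200):
    G = np.fft.fft(g)
    G = farfield_mag * np.exp(1j * np.angle(G))  # impose measured magnitude
    g = np.fft.ifft(G)
    g = aperture_mag * np.exp(1j * np.angle(g))  # impose aperture magnitude

residual = np.linalg.norm(np.abs(np.fft.fft(g)) - farfield_mag)
```

    The error-reduction property of this loop (the residual is non-increasing) is what makes it attractive, though global and conjugate phase ambiguities remain, which is one reason parametric model fitting is the competing approach in the presentation.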

  9. Testing the Impulsiveness of Solar Flare Heating through Analysis of Dynamic Atmospheric Response

    NASA Astrophysics Data System (ADS)

    Newton, E. K.; Emslie, A. G.; Mariska, J. T.

    1996-03-01

    One crucial test of a solar flare energy transport model is its ability to reproduce the characteristics of the atmospheric motions inferred from soft X-ray line spectra. Using a recently developed diagnostic, the velocity differential emission measure (VDEM), we can obtain from observations a physical measure of the amount of soft X-ray emitting plasma flowing at each velocity, v, and hence the total momentum of the upflowing plasma, without approximation or parametric fitting. We have correlated solar hard X-ray emission profiles measured by the Yohkoh Hard X-ray Telescope with the mass and momentum histories inferred from soft X-ray line profiles observed by the Yohkoh Bragg crystal spectrometers. For suitably impulsive hard X-ray emission, an analysis of the hydrodynamic equations predicts a proportionality between the hard X-ray intensity and the second time derivative of the soft X-ray emitting plasma's momentum. This relationship is borne out by an analysis of 18 disk-center impulsive flares of varying durations, thereby lending support to the hypothesis that a prompt energy deposition mechanism, such as an energetic electron flux, is indeed responsible for the soft X-ray response observed in the rise phase of sufficiently impulsive solar flares.

  10. Kinetic analysis of reactions of Si-based epoxy resins by near-infrared spectroscopy, 13C NMR and soft-hard modelling.

    PubMed

    Garrido, Mariano; Larrechi, Maria Soledad; Rius, F Xavier; Mercado, Luis Adolfo; Galià, Marina

    2007-02-05

    A combined soft- and hard-modelling strategy was applied to near-infrared spectroscopy data obtained from monitoring the reaction between glycidyloxydimethylphenyl silane, a silicon-based epoxy monomer, and aniline. On the basis of the pure soft-modelling approach and prior chemical knowledge, a kinetic model for the reaction was proposed. Multivariate curve resolution-alternating least squares optimization was then carried out under a hard constraint that compels the concentration profiles to fulfil the proposed kinetic model at each iteration of the optimization process. In this way, the concentration profiles of each species and the corresponding kinetic rate constants of the reaction, unpublished until now, were obtained. The results were contrasted with 13C NMR. The joint interval test of slope and intercept for detecting bias was not significant (alpha = 5%).

  11. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  12. Aeroelastic considerations for torsionally soft rotors

    NASA Technical Reports Server (NTRS)

    Mantay, W. R.; Yeager, W. T., Jr.

    1986-01-01

    A research study was initiated to systematically determine the impact of selected blade tip geometric parameters on conformable rotor performance and loads characteristics. The model articulated rotors included baseline and torsionally soft blades with interchangeable tips. Seven blade tip designs were evaluated on the baseline rotor and six tip designs were tested on the torsionally soft blades. The designs incorporated a systematic variation in geometric parameters including sweep, taper, and anhedral. The rotors were evaluated in the NASA Langley Transonic Dynamics Tunnel at several advance ratios, lift and propulsive force values, and tip Mach numbers. A track sensitivity study was also conducted at several advance ratios for both rotors. Based on the test results, tip parameter variations generated significant rotor performance and loads differences for both baseline and torsionally soft blades. Azimuthal variation of elastic twist generated by variations in the tip parameters strongly correlated with rotor performance and loads, but the magnitude of advancing blade elastic twist did not. In addition, fixed system vibratory loads and rotor track for potential conformable rotor candidates appear very sensitive to parametric rotor changes.

  13. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  14. Assessment of vulnerable plaque composition by matching the deformation of a parametric plaque model to measured plaque deformation.

    PubMed

    Baldewsing, Radj A; Schaar, Johannes A; Mastik, Frits; Oomens, Cees W J; van der Steen, Antonius F W

    2005-04-01

    Intravascular ultrasound (IVUS) elastography visualizes local radial strain of arteries in so-called elastograms to detect rupture-prone plaques. However, due to the unknown arterial stress distribution these elastograms cannot be directly interpreted as a morphology and material composition image. To overcome this limitation we have developed a method that reconstructs a Young's modulus image from an elastogram. This method is especially suited for thin-cap fibroatheromas (TCFAs), i.e., plaques with a media region containing a lipid pool covered by a cap. Reconstruction is done by a minimization algorithm that matches the strain image output, calculated with a parametric finite element model (PFEM) representation of a TCFA, to an elastogram by iteratively updating the PFEM geometry and material parameters. These geometry parameters delineate the TCFA media, lipid pool and cap regions by circles. The material parameter for each region is a Young's modulus, EM, EL, and EC, respectively. The method was successfully tested on computer-simulated TCFAs (n = 2), one defined by circles, the other by tracing TCFA histology, and additionally on a physical phantom (n = 1) having a stiff wall (measured EM = 16.8 kPa) with an eccentric soft region (measured EL = 4.2 kPa). Finally, it was applied on human coronary plaques in vitro (n = 1) and in vivo (n = 1). The corresponding simulated and measured elastograms of these plaques showed radial strain values from 0% up to 2% at a pressure differential of 20, 20, 1, 20, and 1 mmHg respectively. The used/reconstructed Young's moduli [kPa] were for the circular plaque EL = 50/66, EM = 1500/1484, EC = 2000/2047, for the traced plaque EL = 25/1, EM = 1000/1148, EC = 1500/1491, for the phantom EL = 4.2/4 kPa, EM = 16.8/16, for the in vitro plaque EL = n.a./29, EM = n.a./647, EC = n.a./1784 kPa and for the in vivo plaque EL = n.a./2, EM = n.a./188, Ec = n.a./188 kPa.
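
    The match-and-update idea behind the reconstruction above can be reduced to a one-parameter caricature: iteratively adjust a model's Young's modulus until its predicted strain matches the "measured" strain. The stand-in below uses a trivial analytic model (strain = pressure / E) rather than the paper's parametric finite element model; all numbers are hypothetical.

```python
# Toy sketch of reconstruction-by-matching: update a modulus estimate until
# the model strain reproduces the measured strain. Real reconstructions
# update many geometry and modulus parameters against a full elastogram.

E_true = 500.0                 # kPa, hypothetical "true" modulus
pressure = 20.0                # applied pressure differential (toy units)
strain_meas = pressure / E_true

E = 100.0                      # initial guess
for _ in range(50):
    strain_model = pressure / E
    E *= strain_model / strain_meas   # too-soft model -> too-large strain -> raise E
```

    The same logic, with a finite element solve replacing the division and several parameters updated per iteration, is what the minimization algorithm in the paper performs.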

  15. Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.

    2011-04-01

    Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
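
    The diffusion map construction referenced above is short: build a Gaussian kernel matrix on the samples, normalize it into a Markov transition matrix, and read the low-dimensional coordinates off the leading nontrivial eigenvectors. The sketch below applies it to toy 1-D data; it omits the umbrella-sampling reweighting that is the paper's actual contribution, and the bandwidth is an illustrative choice.

```python
import numpy as np

# Bare-bones diffusion map on toy data sampled along a 1-D "manifold".
# (No umbrella-sampling bias correction here -- plain diffusion map only.)

rng = np.random.default_rng(3)
pts = np.sort(rng.uniform(0.0, 1.0, 80))        # samples along the manifold
d2 = (pts[:, None] - pts[None, :]) ** 2         # pairwise squared distances
eps = 0.1                                       # kernel bandwidth (toy choice)
K = np.exp(-d2 / (2.0 * eps ** 2))              # Gaussian affinity kernel
P = K / K.sum(axis=1, keepdims=True)            # row-stochastic Markov matrix

evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)                 # leading eigenvalue ~ 1 (trivial)
psi1 = evecs[:, order[1]].real                  # first nontrivial coordinate
```

    For data on a one-dimensional manifold, the first nontrivial eigenvector parametrizes position along the curve, which is exactly the kind of collective variable the paper then feeds back into the next round of umbrella sampling.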

  16. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method successfully handles motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.

  17. Designing I*CATch: A Multipurpose, Education-Friendly Construction Kit for Physical and Wearable Computing

    ERIC Educational Resources Information Center

    Ngai, Grace; Chan, Stephen C. F.; Leong, Hong Va; Ng, Vincent T. Y.

    2013-01-01

    This article presents the design and development of i*CATch, a construction kit for physical and wearable computing that was designed to be scalable, plug-and-play, and to provide support for iterative and exploratory learning. It consists of a standardized construction interface that can be adapted for a wide range of soft textiles or electronic…

  18. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
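    The product computation such a coprocessor accelerates is the sum-product check-node update. A minimal full-precision sketch of that update (the textbook tanh rule, not the reduced-precision datapath the paper synthesizes; the clipping constant is just an illustrative guard against arctanh overflow):

```python
import numpy as np

def check_node_update(llrs):
    """Sum-product (tanh-rule) check-node update: for each incoming edge,
    return the extrinsic LLR computed from the product over all OTHER edges."""
    t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))          # leave-one-out product
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out
```

    Quantizing `t` to fewer mantissa bits before the product is one way to model the precision/coding-gain tradeoff the abstract describes.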

  19. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation

    PubMed Central

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D.; Oldham, Kenn R.

    2014-01-01

    High frequency large scanning angle electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror’s nonlinear dynamics under such excitation is analyzed in a Hill’s equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles using iterative approximations for nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror’s frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies. PMID:25506188
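    The Hill's-equation stability analysis mentioned above can be checked numerically with a Floquet (monodromy-matrix) test. The sketch below does this for a simplified Meissner-type equation x'' + (delta + eps*q(t))x = 0 with a duty-cycled square wave q(t); the parameter values are hypothetical and the model omits the mirror's nonlinear capacitance, so it only illustrates the stability-tongue idea.

```python
import numpy as np

def floquet_stable(delta, eps, duty=0.5, n_steps=4000):
    """Floquet test for x'' + (delta + eps * q(t)) x = 0, where q(t) is a
    duty-cycled square wave of period T = 2*pi.  The solution is bounded
    (stable) iff |trace of the monodromy matrix| < 2."""
    T = 2.0 * np.pi
    dt = T / n_steps
    M = np.eye(2)
    for k in range(n_steps):
        t = k * dt
        q = 1.0 if t < duty * T else 0.0
        a = delta + eps * q
        # one semi-implicit (symplectic) Euler step of x' = v, v' = -a x
        A = np.array([[1.0, dt],
                      [-a * dt, 1.0 - a * dt * dt]])
        M = A @ M
    return bool(abs(np.trace(M)) < 2.0)
```

    Sweeping `delta` (i.e., drive frequency) against drive strength traces out the voltage-frequency stability regions described in the abstract.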

  20. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10⁻⁵) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  1. Iterative local Gaussian clustering for expressed genes identification linked to malignancy of human colorectal carcinoma

    PubMed Central

    Wasito, Ito; Hashim, Siti Zaiton M; Sukmaningrum, Sri

    2007-01-01

    Gene expression profiling plays an important role in the identification of biological and clinical properties of human solid tumors such as colorectal carcinoma. Profiling is required to reveal underlying molecular features for diagnostic and therapeutic purposes. A non-parametric, density-estimation-based approach called iterative local Gaussian clustering (ILGC) was used to identify clusters of expressed genes. We used experimental data from a previous study by Muro and others consisting of 1,536 genes in 100 colorectal cancer and 11 normal tissues. In this dataset, the ILGC finds three clusters, two large and one small gene clusters, similar to their results which used Gaussian mixture clustering. The correlation of each gene cluster with clinical properties of malignancy of human colorectal cancer was analysed: the presence of tumor versus normal tissue, the presence of distant metastasis, and the presence of lymph node metastasis. PMID:18305825
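    The ILGC algorithm itself is not specified here, but the general idea of non-parametric, density-estimation-based clustering can be illustrated with a one-dimensional mean-shift sketch: each point climbs to a mode of a Gaussian kernel density estimate, and points sharing a mode form a cluster. This is an illustrative stand-in, not the ILGC algorithm; the bandwidth and rounding tolerance are arbitrary choices.

```python
import numpy as np

def mean_shift_1d(x, bandwidth, n_iter=50):
    """Density-based clustering sketch: iterate each point toward the nearest
    mode of a Gaussian KDE, then group points whose modes coincide."""
    modes = x.astype(float).copy()
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((modes[:, None] - x[None, :]) / bandwidth) ** 2)
        modes = (w * x).sum(axis=1) / w.sum(axis=1)   # mean-shift update
    # points converging to the same mode (within tolerance) share a label
    labels = np.unique(np.round(modes, 3), return_inverse=True)[1]
    return labels
```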

  2. Iterative local Gaussian clustering for expressed genes identification linked to malignancy of human colorectal carcinoma.

    PubMed

    Wasito, Ito; Hashim, Siti Zaiton M; Sukmaningrum, Sri

    2007-12-30

    Gene expression profiling plays an important role in the identification of biological and clinical properties of human solid tumors such as colorectal carcinoma. Profiling is required to reveal underlying molecular features for diagnostic and therapeutic purposes. A non-parametric, density-estimation-based approach called iterative local Gaussian clustering (ILGC) was used to identify clusters of expressed genes. We used experimental data from a previous study by Muro and others consisting of 1,536 genes in 100 colorectal cancer and 11 normal tissues. In this dataset, the ILGC finds three clusters, two large and one small gene clusters, similar to their results which used Gaussian mixture clustering. The correlation of each gene cluster with clinical properties of malignancy of human colorectal cancer was analysed: the presence of tumor versus normal tissue, the presence of distant metastasis, and the presence of lymph node metastasis.

  3. Conditional Anomaly Detection with Soft Harmonic Functions

    PubMed Central

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2012-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions. PMID:25309142
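    The soft harmonic solution has a simple closed form: with graph Laplacian L and per-node label-confidence weights C = diag(c), it solves (L + C)x = Cy. A toy NumPy sketch follows; the graph, weights, and labels are invented for illustration, and the paper's additional regularization against isolated and boundary examples is omitted.

```python
import numpy as np

def soft_harmonic(W, y, c):
    """Soft harmonic solution on a weighted graph:
    minimize x^T L x + sum_i c_i (x_i - y_i)^2, i.e. x = (L + C)^(-1) C y."""
    L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
    C = np.diag(c)
    return np.linalg.solve(L + C, C @ y)
```

    Labels with low confidence (small c_i) are pulled toward the harmonic average of their neighbors; a large gap between x_i and y_i then flags a potentially anomalous label.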

  4. Conditional Anomaly Detection with Soft Harmonic Functions.

    PubMed

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F; Hauskrecht, Milos

    2011-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions.

  5. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.

  6. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. And then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. 
Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  7. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. And then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. 
Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  8. Transverse vetoes with rapidity cutoff in SCET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis

    We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min p_T for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq′ → qq′, for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.

  9. Transverse vetoes with rapidity cutoff in SCET

    DOE PAGES

    Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis; ...

    2017-12-11

    We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min p_T for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq′ → qq′, for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.

  10. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm, and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
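    The iterative soft-thresholding machinery these comparisons refer to reduces, in its simplest form, to the sketch below: plain ISTA with a fixed threshold for min ½||Ax − y||² + λ||x||₁. The paper's algorithm instead adapts the threshold to the estimated noise level and weights it by a wavelet-domain edge-correlation prior, neither of which is reproduced here.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage/thresholding: gradient step on the data-fidelity
    term followed by soft thresholding, with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

    FISTA, also cited in the comparison, adds a momentum term on top of exactly this iteration.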

  11. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies for achieving densities of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between an iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, at a bit error rate of 10⁻⁶.

  12. Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodak, A.; Zhai, Y.; Wang, W.

    As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and the plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts, and simultaneously, fluid dynamics analysis was performed only in the liquid part. ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of DSM and DFW to ensure DSM design meets the DFW cooling requirements. Design of the DSM included voids filled with Boron Carbide pellets, allowing weight reduction while keeping shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties using analytical relation for thermal conductivity. Results of the analysis lead to design modifications improving heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance as well as effect of Boron Carbide will be presented.

  13. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao

    2018-03-01

    Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that overall the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, whether or not the parameter distribution is multimodal.
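    The basic (global, non-iterative) ensemble-smoother update that ILUES localizes and iterates can be sketched as follows. The perturbed-observation form and the toy linear test are standard textbook material, not the authors' ILUES code; the seed and covariance values are arbitrary.

```python
import numpy as np

def es_update(X, Y, d, R, seed=0):
    """One ensemble-smoother update with perturbed observations.
    X: (Ne, Nx) parameter ensemble; Y: (Ne, Ny) predicted data;
    d: (Ny,) observations; R: (Ny, Ny) observation-error covariance."""
    Ne = len(X)
    A = X - X.mean(axis=0)
    B = Y - Y.mean(axis=0)
    Cxy = A.T @ B / (Ne - 1)           # parameter-data cross-covariance
    Cyy = B.T @ B / (Ne - 1)           # data auto-covariance
    K = Cxy @ np.linalg.inv(Cyy + R)   # Kalman-type gain
    rng = np.random.default_rng(seed)
    D = d + rng.multivariate_normal(np.zeros(len(d)), R, size=Ne)
    return X + (D - Y) @ K.T
```

    ILUES applies this same update, but restricted to each sample's local ensemble, and repeats it over several assimilation iterations.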

  14. Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module

    DOE PAGES

    Khodak, A.; Zhai, Y.; Wang, W.; ...

    2017-06-19

    As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and the plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts, and simultaneously, fluid dynamics analysis was performed only in the liquid part. ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of DSM and DFW to ensure DSM design meets the DFW cooling requirements. Design of the DSM included voids filled with Boron Carbide pellets, allowing weight reduction while keeping shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties using analytical relation for thermal conductivity. Results of the analysis lead to design modifications improving heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance as well as effect of Boron Carbide will be presented.

  15. Elasto-Plastic Behavior of Aluminum Foams Subjected to Compression Loading

    NASA Astrophysics Data System (ADS)

    Silva, H. M.; Carvalho, C. D.; Peixinho, N. R.

    2017-05-01

    The non-linear behavior of uniform-size cellular aluminum foams subjected to compressive loads is investigated by comparing numerical results obtained with the finite element method (FEM) software ANSYS Workbench and ANSYS Mechanical APDL (ANSYS Parametric Design Language). The numerical model is built in AUTODESK INVENTOR, imported into ANSYS, and solved by the Newton-Raphson iterative method. Conditions were kept as similar as possible between ANSYS Mechanical and ANSYS Workbench. The obtained numerical results and the differences between the two programs are presented and discussed.
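    The Newton-Raphson iteration named above, reduced to the scalar case for illustration (ANSYS applies it to the assembled nonlinear finite-element system; this one-unknown sketch just shows the update rule x ← x − f(x)/f'(x)):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson iteration: repeat x <- x - f(x)/f'(x)
    until the step is smaller than `tol`."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```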

  16. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parametric family of four-level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in space and time steps in the discrete maximal norm.

  17. Peri-implant soft tissue colour around titanium and zirconia abutments: a prospective randomized controlled clinical study.

    PubMed

    Cosgarea, Raluca; Gasparik, Cristina; Dudea, Diana; Culic, Bogdan; Dannewitz, Bettina; Sculean, Anton

    2015-05-01

    To objectively determine the difference in colour between the peri-implant soft tissue at titanium and zirconia abutments. Eleven patients, each with two contralaterally inserted osteointegrated dental implants, were included in this study. The implants were restored either with titanium abutments and porcelain-fused-to-metal crowns, or with zirconia abutments and ceramic crowns. Prior to and after crown cementation, multi-spectral images of the peri-implant soft tissues and the gingiva of the neighbouring teeth were taken with a colorimeter. The colour parameters L*, a*, b*, c* and the colour differences ΔE were calculated. Descriptive statistics, including non-parametric tests and correlation coefficients, were used for statistical analyses of the data. Compared to the gingiva of the neighbouring teeth, the peri-implant soft tissue around titanium and zirconia (test group) showed distinguishable ΔE both before and after crown cementation. Colour differences around titanium were statistically significantly different (P = 0.01) only at 1 mm prior to crown cementation compared to zirconia. Compared to the gingiva of the neighbouring teeth, statistically significant (P < 0.01) differences were found for all colour parameters, either before or after crown cementation for both abutments; more significant differences were registered for titanium abutments. Tissue thickness correlated positively with c*-values for titanium at 1 mm and 2 mm from the gingival margin. Within their limits, the present data indicate that: (i) the peri-implant soft tissue around titanium and zirconia showed colour differences when compared to the soft tissue around natural teeth, and (ii) the peri-implant soft tissue around zirconia demonstrated a better colour match to the soft tissue at natural teeth than titanium. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
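    For reference, the classic CIE76 colour-difference formula is the Euclidean distance in CIELAB space. The study does not state which ΔE formula the colorimeter software uses (newer instruments often use CIEDE2000), so the sketch below shows only the CIE76 definition.

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*):
    Delta-E = sqrt(dL^2 + da^2 + db^2)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))
```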

  18. 3D printed soft parallel actuator

    NASA Astrophysics Data System (ADS)

    Zolfagharian, Ali; Kouzani, Abbas Z.; Khoo, Sui Yang; Noshadi, Amin; Kaynak, Akif

    2018-04-01

This paper presents a 3-dimensional (3D) printed soft parallel contactless actuator for the first time. The actuator involves an electro-responsive parallel mechanism made of two 3D-printed segments, namely an active chain and a passive chain. The active chain is attached to the ground at one end and comprises two actuator links made of responsive hydrogel. The passive chain, on the other hand, is attached to the active chain at one end and consists of two rigid links made of polymer. The actuator links are printed using an extrusion-based 3D-Bioplotter with polyelectrolyte hydrogel as printer ink. The rigid links are printed by a 3D fused deposition modelling (FDM) printer with acrylonitrile butadiene styrene (ABS) as print material. The kinematics model of the soft parallel actuator is derived using transformation matrix notation to simulate and determine the workspace of the actuator. The printed soft parallel actuator is then immersed in NaOH solution with a specific voltage applied to it via two contactless electrodes. The experimental data are then collected and used to develop a parametric model that estimates the end-effector position and refines the kinematics model in response to a specific input voltage over time. The electroactive actuator is observed to behave as expected from the simulation of its kinematics model. The use of 3D printing for the fabrication of parallel soft actuators opens a new chapter in manufacturing sophisticated soft actuators with high dexterity and mechanical robustness for biomedical applications such as cell manipulation and drug release.
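The transformation-matrix kinematics mentioned above can be sketched for a planar serial chain. This is not the paper's actual parallel-mechanism model; it is a minimal illustration of chaining homogeneous transforms, with joint angles and link lengths chosen arbitrarily.

```python
import numpy as np

def T(theta, length):
    """Homogeneous 2D transform: rotate by theta, then move `length` along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, c * length],
                     [s,  c, s * length],
                     [0,  0, 1.0]])

def end_effector(thetas, lengths):
    """Chain the per-link transforms and return the end-effector (x, y)."""
    M = np.eye(3)
    for th, L in zip(thetas, lengths):
        M = M @ T(th, L)
    return M[0, 2], M[1, 2]

# Two links of length 10: first points straight up, second bends back to horizontal
x, y = end_effector([np.pi / 2, -np.pi / 2], [10.0, 10.0])
```

Sweeping the joint angles over their admissible ranges and collecting the resulting (x, y) points is the standard way to trace out the reachable workspace, as the abstract describes.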

  19. Dynamic iterative beam hardening correction (DIBHC) in myocardial perfusion imaging using contrast-enhanced computed tomography.

    PubMed

    Stenner, Philip; Schmidt, Bernhard; Allmendinger, Thomas; Flohr, Thomas; Kachelrieß, Marc

    2010-06-01

In cardiac perfusion examinations with computed tomography (CT), large concentrations of iodine in the ventricle and in the descending aorta cause beam hardening artifacts that can lead to incorrect perfusion parameters. The aim of this study is to reduce these artifacts by performing an iterative correction and by accounting for the three materials soft tissue, bone, and iodine. Beam hardening corrections are either implemented as simple precorrections, which cannot account for higher order beam hardening effects, or as iterative approaches that are based on segmenting the original image into material distribution images. Conventional segmentation algorithms fail to clearly distinguish between iodine and bone. Our new algorithm, DIBHC, calculates the time-dependent iodine distribution by analyzing the voxel changes of a cardiac perfusion examination (typically N ≈ 15 electrocardiogram-correlated scans distributed over a total scan time of up to T ≈ 30 s). These voxel dynamics are due to changes in contrast agent. This prior information allows bone and iodine to be precisely distinguished and is key to DIBHC, where each iteration consists of a multimaterial (soft tissue, bone, iodine) polychromatic forward projection, a raw data comparison, and a filtered backprojection. Simulations with a semi-anthropomorphic dynamic phantom and clinical scans using a dual source CT scanner with 2 x 128 slices, a tube voltage of 100 kV, a tube current-time product of 180 mAs, and a rotation time of 0.28 s have been carried out. The uncorrected images suffer from beam hardening artifacts that appear as dark bands connecting large concentrations of iodine in the ventricle, aorta, and bony structures. The CT-values of the affected tissue are usually underestimated by roughly 20 HU, although deviations of up to 61 HU have been observed. For a quantitative evaluation, circular regions of interest have been analyzed. 
After application of DIBHC the mean values obtained deviate by only 1 HU for the simulations and the corrected values show an increase of up to 61 HU for the measurements. One iteration of DIBHC greatly reduces the beam hardening artifacts induced by the contrast agent dynamics (and those due to bone) now allowing for an improved assessment of contrast agent uptake in the myocardium which is essential for determining myocardial perfusion.
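The multimaterial polychromatic forward projection at the heart of DIBHC boils down to summing Beer-Lambert attenuation over the energy spectrum. The sketch below uses a hypothetical two-bin spectrum and made-up attenuation coefficients; it only illustrates why an iodinated beam makes the same soft-tissue path read differently (the beam-hardening error the correction models).

```python
import math

# Hypothetical two-bin spectrum (weights sum to 1) and per-material
# attenuation coefficients mu[material][energy_bin] in 1/cm (illustrative).
spectrum = [0.6, 0.4]                 # low- and high-energy photon fractions
mu = {"soft": [0.25, 0.18], "bone": [0.60, 0.35], "iodine": [2.0, 0.9]}

def poly_attenuation(path_cm):
    """-ln(I/I0) for a polychromatic beam crossing the given material path lengths."""
    I = sum(w * math.exp(-sum(mu[m][e] * L for m, L in path_cm.items()))
            for e, w in enumerate(spectrum))
    return -math.log(I)

# Soft tissue alone vs. the *apparent* soft-tissue contribution when 1 cm of
# iodine also lies in the beam: the shortfall is the beam-hardening error.
p_soft = poly_attenuation({"soft": 20.0})
p_soft_hardened = (poly_attenuation({"soft": 20.0, "iodine": 1.0})
                   - poly_attenuation({"iodine": 1.0}))
```

Because the iodine preferentially removes low-energy photons, the remaining beam is "harder" and the soft-tissue segment appears less attenuating, producing the dark bands the abstract describes.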

  20. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performance of the two algorithms is compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables are proposed, together with two further approximations (linear and threshold) that eliminate the need for lookup tables at the cost of a very small penalty.
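The lookup-table operation and its linear and threshold approximations refer to the Jacobian logarithm (the max* operator) used in log-MAP decoding: max*(a, b) = max(a, b) + ln(1 + e^-|a-b|). The exact form below is standard; the constants in the two approximations are illustrative, not the paper's fitted values.

```python
import math

def max_star_exact(a, b):
    """Jacobian logarithm: max*(a, b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_linear(a, b, slope=0.25, clip=0.6931):
    """Linear approximation of the correction term (illustrative constants)."""
    return max(a, b) + max(0.0, clip - slope * abs(a - b))

def max_star_threshold(a, b, t=2.0, c=0.375):
    """Threshold approximation: fixed correction when |a - b| is small, else none."""
    return max(a, b) + (c if abs(a - b) < t else 0.0)
```

At a = b the exact correction is ln 2 ≈ 0.693; as |a - b| grows it decays to zero, which is why a coarse lookup table, a clipped line, or even a single threshold loses very little decoding performance.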

  1. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Component Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).

  2. Parametric Loop Division for 3D Localization in Wireless Sensor Networks

    PubMed Central

    Ahmad, Tanveer

    2017-01-01

Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve localization accuracy. However, they are either limited to two-dimensional (2D) space, or require specific sensor deployment for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
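The "iteratively shrink the bounded region towards its center" idea can be illustrated with a toy stand-in: repeatedly average each vertex of the anchor polygon with its neighbours so that all vertices contract to a single interior point, taken as the location estimate. This is a simplification for illustration, not the paper's actual Loop-subdivision scheme.

```python
def shrink_region(anchors, iterations=20, alpha=0.5):
    """Iteratively contract the polygon of 3D anchor positions by moving each
    vertex toward the midpoint of its two neighbours (toy stand-in for PLD)."""
    pts = [tuple(map(float, p)) for p in anchors]
    n = len(pts)
    for _ in range(iterations):
        pts = [tuple((1 - alpha) * pts[i][k]
                     + alpha * 0.5 * (pts[i - 1][k] + pts[(i + 1) % n][k])
                     for k in range(3))
               for i in range(n)]
    # All vertices converge toward one interior point: the location estimate
    return tuple(sum(p[k] for p in pts) / n for k in range(3))

est = shrink_region([(0, 0, 0), (10, 0, 0), (10, 10, 0), (0, 10, 10)])
```

Each averaging step is a contraction that preserves the polygon centroid, so the estimate lands at the centroid of the anchor region regardless of the iteration count.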

  3. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
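The S-system-type balance with a power-law efflux term mentioned above has the form dX/dt = influx(t) - alpha * X^g for each metabolite. A minimal single-metabolite sketch with constant influx is shown below; all rate constants are illustrative, not estimates from the paper.

```python
def simulate(influx, alpha, g, x0, dt=0.001, steps=5000):
    """Euler integration of dX/dt = influx(t) - alpha * X**g: an
    S-system-type mass balance with a simple power-law efflux term."""
    x, t, traj = x0, 0.0, []
    for _ in range(steps):
        x += dt * (influx(t) - alpha * x ** g)
        t += dt
        traj.append(x)
    return traj

# Constant influx: the smoothed concentration settles at the steady state
# X* = (influx / alpha) ** (1 / g); here (2.0 / 0.5) ** 0.5 = 2.0
traj = simulate(lambda t: 2.0, alpha=0.5, g=2.0, x0=0.1)
```

In the smoothing context, the curve produced by integrating such a balance automatically satisfies the mass balance, which is the property the abstract highlights; the parameters alpha and g would be fitted stepwise to the measured time series.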

  4. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  5. Parametrization of a nonlocal chiral quark model in the instantaneous three-flavor case. Basic formulas and tables

    NASA Astrophysics Data System (ADS)

    Grigorian, H.

    2007-05-01

We describe the basic formulation of the parametrization scheme for the instantaneous nonlocal chiral quark model in the three-flavor case. We choose to discuss the Gaussian, Lorentzian-type, Woods-Saxon, and sharp cutoff (NJL) functional forms of the momentum dependence for the form factor of the separable interaction. The four parameters (light and strange quark masses, coupling strength G_S, and range of the interaction Λ) have been fixed by the same phenomenological inputs: the pion and kaon masses, the pion decay constant, and the light quark mass in vacuum. The Woods-Saxon and Lorentzian-type form factors are suitable for an interpolation between sharp cutoff and soft momentum dependence. Results are tabulated for applications in models of hadron structure and quark matter at finite temperatures and chemical potentials, where separable models have proven successful.

  6. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. 
Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  7. Redesign and Rehost of the BIG STICK Nuclear Wargame Simulation

    DTIC Science & Technology

    1988-12-01

    described by Pressman [16]. The 4GT software development approach consists of four iterative phases: the requirements gathering phase, the design strategy...2. BIG STICK Instructions and Planning Guidance. Air Command and Staff College, Air University, Maxwell AFB AL, 1987. Unpublished Manual. 3. Barry W...Software Engineering Notes, 7:29-32, April 1982. 81 17. Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Book

  8. Soft Clustering Criterion Functions for Partitional Document Clustering

    DTIC Science & Technology

    2004-05-26

    in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between...it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets

  9. Soft context clustering for F0 modeling in HMM-based speech synthesis

    NASA Astrophysics Data System (ADS)

    Khorram, Soheil; Sameti, Hossein; King, Simon

    2015-12-01

    This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. 
In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
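The soft decisions at internal nodes described above can be sketched for a depth-2 tree: each internal node routes a context to *both* children with sigmoid membership degrees, so every context reaches all leaves with weights that sum to one. The split parameters below are arbitrary illustrations, not trained values from the system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def leaf_memberships(x, nodes):
    """Leaf membership degrees in a depth-2 soft decision tree. `nodes` maps
    internal-node ids to (weight, bias); a context x goes to the left child
    with degree sigmoid(w * x + b) and to the right with the complement."""
    g = {k: sigmoid(w * x + b) for k, (w, b) in nodes.items()}
    root, left, right = g["root"], g["L"], g["R"]
    return {                       # each context overlaps *all* four leaves
        "LL": root * left,
        "LR": root * (1 - left),
        "RL": (1 - root) * right,
        "RR": (1 - root) * (1 - right),
    }

m = leaf_memberships(0.5, {"root": (4.0, -1.0), "L": (2.0, 0.0), "R": (-3.0, 1.0)})
```

Because each context contributes (fractionally) to several leaves, the per-leaf statistics are pooled across overlapping contexts, which is the mechanism that mitigates the data sparsity of hard tree clustering.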

  10. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method has been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. 
Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
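The standard Patlak analysis underlying this record is a linear fit: for late frames, ct(t)/cp(t) = Ki * (∫cp dτ)/cp(t) + V, where cp is the plasma input and ct the tissue curve. The sketch below fits that line to synthetic data generated from known parameters; the curves and values are illustrative, not from the study.

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Ordinary least-squares fit of the standard Patlak model:
    ct/cp = Ki * cumint(cp)/cp + V  (late frames assumed at equilibrium)."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = cum[1:] / cp[1:]                  # Patlak "stretched time"
    y = ct[1:] / cp[1:]
    ki, v = np.polyfit(x, y, 1)           # slope = influx rate Ki, intercept = V
    return ki, v

# Synthetic curves generated from known Ki = 0.05, V = 0.3
t = np.linspace(0.0, 60.0, 61)
cp = 10.0 * np.exp(-0.05 * t) + 1.0
cum = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = 0.05 * cum + 0.3 * cp
ki, v = patlak_fit(t, cp, ct)
```

The "direct 4D" methods in the record fold this model into the reconstruction itself rather than fitting it voxel-wise to reconstructed frames, which is where the noise advantage comes from; gPatlak adds an uptake-reversibility term to the same linear backbone.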

  11. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques, based on the transitive closure of dependence graphs, to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). To this end, we generate two nonparametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors that appear in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor appearing at the same position in the fixed tiled code, and replace the expressions including integer factors with expressions including parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. 
For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results obtained on modern Intel multi-core processors demonstrate that this code outperforms known closely related implementations when the length of the RNA strands exceeds 2500.
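For reference, the Nussinov recurrence that the tiled code implements is the classic O(n^3) dynamic program below (untiled and sequential; the paper's contribution is the polyhedral tiling of exactly these loops). The minimum hairpin-loop length of 1 is a common convention.

```python
def nussinov(seq, min_loop=1):
    """Maximum number of non-crossing base pairs (Nussinov recurrence):
    N[i][j] = max(N[i+1][j-1] + pair(i, j), max over k of N[i][k] + N[k+1][j])."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence length
        for i in range(n - span):
            j = i + span
            # Option 1: pair i with j (hairpin loop of at least min_loop bases)
            best = N[i + 1][j - 1] + (1 if (seq[i], seq[j]) in pairs else 0)
            # Option 2: bifurcate at k (k = i also covers "leave i unpaired")
            for k in range(i, j):
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]
```

The triple loop over (span, i, k) is exactly the affine iteration space the abstract refers to; its non-uniform dependences (cell (i, j) reads whole rows and columns) are what make transitive-closure-based tiling necessary.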

  12. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate K i as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting K i images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method has been suggested to limit K i bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced K i target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. 
Moreover, systematic reduction in K i % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior K i CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.

  13. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to compute an implicit representation of minimum KL-divergence models.

  14. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.

  15. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

    PubMed

    De Marco, Paolo; Origgi, Daniela

    2018-03-01

To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters, at five different levels of CTDIvol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%) and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V had no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V achieves greater noise reduction than ASiR at both STs. For the SOFT kernel, results vary with dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. The NPS for the CT Revolution has dose-dependent behavior at lower doses. The NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. The NPS for ASiR-V appears to be ST dependent, having a shift toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents completely different behavior, with less noise reduction and a modified NPS. Noise properties of ASiR-V are dependent on reconstruction slice thickness. The noise properties of ASiR-V suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
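The two quantities compared in this record, percent noise reduction and the NPS shift toward lower frequencies, can be sketched on synthetic noise. NPS normalization conventions vary; the 1D form below (pixel spacing over samples times the mean squared FFT magnitude) is one common choice, and the "IR-like" image is simply a low-pass-filtered copy of white noise, not output of ASiR-V.

```python
import numpy as np

def noise_reduction_pct(std_fbp, std_ir):
    """Percent noise reduction of an IR image relative to FBP."""
    return 100.0 * (std_fbp - std_ir) / std_fbp

def nps_1d(noise_rows, pixel_mm):
    """1D noise power spectrum of mean-subtracted noise profiles:
    NPS(f) = (dx / N) * <|FFT(row)|^2>, averaged over rows."""
    rows = np.asarray(noise_rows, dtype=float)
    rows = rows - rows.mean(axis=1, keepdims=True)
    n = rows.shape[1]
    return pixel_mm / n * np.mean(np.abs(np.fft.rfft(rows, axis=1)) ** 2, axis=0)

rng = np.random.default_rng(0)
white = rng.normal(0.0, 10.0, size=(64, 128))            # FBP-like noise
smooth = (white + np.roll(white, 1, axis=1)) / 2.0       # IR-like low-pass noise
```

Running `nps_1d` on both arrays shows the signature reported in the abstract: the smoothed (IR-like) noise has a lower standard deviation and its NPS is depleted at high spatial frequencies, i.e. shifted toward lower frequencies.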

  16. Thermalization of mini-jets in a quark-gluon plasma

    NASA Astrophysics Data System (ADS)

    Iancu, Edmond; Wu, Bin

    2015-10-01

We complete the physical picture for the evolution of a high-energy jet propagating through a weakly coupled quark-gluon plasma by investigating the thermalization of the soft components of the jet. We argue that the following scenario should hold: the leading particle emits a significant number of mini-jets which promptly evolve via quasi-democratic branchings and thus degrade into a myriad of soft gluons, with energies of the order of the medium temperature T. Via elastic collisions with the medium constituents, these soft gluons relax to local thermal equilibrium with the plasma over a time scale which is considerably shorter than the typical lifetime of the mini-jet. The thermalized gluons form a tail which lags behind the hard components of the jet. We support this scenario, first, via parametric arguments and, next, by studying a simplified kinetic equation, which describes the jet dynamics in longitudinal phase-space. We solve the kinetic equation using both (semi-)analytical and numerical methods. In particular, we obtain the first exact, analytic solutions to the ultrarelativistic Fokker-Planck equation in one-dimensional phase-space. Our results confirm the aforementioned physical picture and demonstrate the quenching of the jet via multiple branching followed by the thermalization of the soft gluons in the cascades.

  17. Gaseous electron multiplier-based soft x-ray plasma diagnostics development: Preliminary tests at ASDEX Upgrade.

    PubMed

    Chernyshova, M; Malinowski, K; Czarski, T; Wojeński, A; Vezinet, D; Poźniak, K T; Kasprowicz, G; Mazon, D; Jardin, A; Herrmann, A; Kowalska-Strzęciwilk, E; Krawczyk, R; Kolasiński, P; Zabołotny, W; Zienkiewicz, P

    2016-11-01

    A Gaseous Electron Multiplier (GEM)-based detector is being developed for soft X-ray diagnostics on tokamaks. Its main goal is to facilitate transport studies of impurities like tungsten. Such studies are very relevant to ITER, where the excessive accumulation of impurities in the plasma core should be avoided. This contribution provides details of the preliminary tests at ASDEX Upgrade (AUG) with a focus on the most important aspects for detector operation in harsh radiation environment. It was shown that both spatially and spectrally resolved data could be collected, in a reasonable agreement with other AUG diagnostics. Contributions to the GEM signal include also hard X-rays, gammas, and neutrons. First simulations of the effect of high-energy photons have helped understanding these contributions.

  19. Parametric studies of metabolic cooperativity in Escherichia coli colonies: Strain and geometric confinement effects.

    PubMed

    Peterson, Joseph R; Cole, John A; Luthey-Schulten, Zaida

    2017-01-01

    Characterizing the complex spatial and temporal interactions among cells in a biological system (i.e. bacterial colony, microbiome, tissue, etc.) remains a challenge. Metabolic cooperativity in these systems can arise due to the subtle interplay between microenvironmental conditions and the cells' regulatory machinery, often involving cascades of intra- and extracellular signalling molecules. In the simplest of cases, as demonstrated in a recent study of the model organism Escherichia coli, metabolic cross-feeding can arise in monoclonal colonies of bacteria driven merely by spatial heterogeneity in the availability of growth substrates; namely, acetate, glucose and oxygen. Another recent study demonstrated that even closely related E. coli strains evolved different glucose utilization and acetate production capabilities, hinting at the possibility of subtle differences in metabolic cooperativity and the resulting growth behavior of these organisms. Taking a first step towards understanding the complex spatio-temporal interactions within microbial populations, we performed a parametric study of E. coli growth on an agar substrate and probed the dependence of colony behavior on: 1) strain-specific metabolic characteristics, and 2) the geometry of the underlying substrate. To do so, we employed a recently developed multiscale technique named 3D dynamic flux balance analysis which couples reaction-diffusion simulations with iterative steady-state metabolic modeling. Key measures examined include colony growth rate and shape (height vs. width), metabolite production/consumption and concentration profiles, and the emergence of metabolic cooperativity and the fractions of cell phenotypes. Five closely related strains of E. coli, which exhibit large variation in glucose consumption and organic acid production potential, were studied. 
The onset of metabolic cooperativity was found to vary substantially between these five strains by up to 10 hours, and the relative fraction of acetate-utilizing cells within the colonies varied by a factor of two. Additionally, growth in six different geometries, designed to mimic those that might be found in a laboratory, a microfluidic device, and inside a living organism, was considered. Geometries were found to have complex, often nonlinear effects on colony growth and cross-feeding, with "hard" features resulting in larger effects than "soft" features. These results demonstrate that strain-specific features and spatial constraints imposed by the growth substrate can have significant effects even for microbial populations as simple as isogenic E. coli colonies.

  20. Toward Modular Soft Robotics: Proprioceptive Curvature Sensing and Sliding-Mode Control of Soft Bidirectional Bending Modules.

    PubMed

    Luo, Ming; Skorina, Erik H; Tao, Weijia; Chen, Fuchen; Ozel, Selim; Sun, Yinan; Onal, Cagdas D

    2017-06-01

    Real-world environments are complex, unstructured, and often fragile. Soft robotics offers a solution for robots to safely interact with the environment and human coworkers, but suffers from a host of challenges in sensing and control of continuously deformable bodies. To overcome these challenges, this article considers a modular soft robotic architecture that offers proprioceptive sensing of pressure-operated bending actuation modules. We present integrated custom magnetic curvature sensors embedded in the neutral axis of bidirectional bending actuators. We describe our recent advances in the design and fabrication of these modules to improve the reliability of proprioceptive curvature feedback over our prior work. In particular, we study the effect of dimensional parameters on improving the linearity of curvature measurements. In addition, we present a sliding-mode controller formulation that drives the binary solenoid valve states directly, giving the control system the ability to hold the actuator steady without continuous pressurization and depressurization. In comparison to other methods, this control approach does not rely on pulse width modulation and hence offers superior dynamic performance (i.e., faster response rates). Our experimental results indicate that the proposed soft robotic modules offer a large range of bending angles with monotonic and more linear embedded curvature measurements, and that the direct sliding-mode control system exhibits improved bandwidth and a notable reduction in binary valve actuation operations compared to our earlier iterative sliding-mode controller.
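    The direct valve-driving idea can be caricatured with a toy first-order actuator model (an assumption for illustration, not the authors' dynamics): the binary solenoid state is set directly from the sign of the sliding variable s = x − x_ref, so the actuator fills when below the target and leaks when above, chattering in a narrow band without continuous pressurization and depressurization:

```python
# Toy direct sliding-mode control of a binary valve (illustrative model only).
dt = 0.001                       # control period [s]
fill_rate, leak_rate = 5.0, 1.0  # assumed first-order plant coefficients
x, x_ref = 0.0, 0.6              # state (e.g. curvature) and its setpoint
trace = []
for _ in range(5000):
    s = x - x_ref                # sliding variable
    u = 1.0 if s < 0.0 else 0.0  # valve fully open only when below target
    x += dt * (fill_rate * u - leak_rate * x)
    trace.append(x)
```

After the reaching phase the state stays within a band of width ~dt·fill_rate around the setpoint, held there by sparse valve switchings rather than by pulse width modulation.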

  1. Fractal nematic colloids

    NASA Astrophysics Data System (ADS)

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter.

  2. A method for the dynamic and thermal stress analysis of space shuttle surface insulation

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1975-01-01

    The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.

  3. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    NASA Astrophysics Data System (ADS)

    Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.
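    The indirect method that direct reconstruction improves upon can be sketched compactly. Below, a voxel time-activity curve is simulated from the 1-tissue compartment model C_T(t) = K1·(Cp ⊗ e^(−k2·t)) and the parameters are recovered by least squares, giving V_T = K1/k2; the plasma input function, parameter values, and the crude grid search (standing in for the usual nonlinear fit) are all illustrative assumptions:

```python
import numpy as np

def tissue_curve(t, K1, k2, Cp):
    # 1-tissue compartment model: C_T(t) = K1 * (Cp convolved with exp(-k2 t)).
    dt = t[1] - t[0]
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[: len(t)] * dt

t = np.arange(0.0, 60.0, 0.25)            # minutes
Cp = t * np.exp(-t / 4.0)                 # assumed gamma-like plasma input
K1_true, k2_true = 0.3, 0.1               # "true" kinetics for the demo
C_meas = tissue_curve(t, K1_true, k2_true, Cp)

# Indirect step: least-squares fit of (K1, k2) to the voxel TAC by grid search.
K1_hat, k2_hat = min(
    ((K1, k2) for K1 in np.linspace(0.1, 0.5, 41)
              for k2 in np.linspace(0.02, 0.3, 57)),
    key=lambda pars: np.sum((tissue_curve(t, pars[0], pars[1], Cp) - C_meas) ** 2),
)
VT_hat = K1_hat / k2_hat                  # distribution volume V_T = K1 / k2
```

Direct reconstruction replaces this two-step pipeline by embedding the kinetic model inside the iterative reconstruction itself, which is what reduces the coefficient of variation at low counts.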

  4. Inductive electronegativity scale. Iterative calculation of inductive partial charges.

    PubMed

    Cherkasov, Artem

    2003-01-01

    A number of novel QSAR descriptors have been introduced on the basis of the previously elaborated models for steric and inductive effects. The developed "inductive" parameters include absolute and effective electronegativity, atomic partial charges, and local and global chemical hardness and softness. Being based on traditional inductive and steric substituent constants, these 3D descriptors provide a valuable insight into intramolecular steric and electronic interactions and can find broad application in structure-activity studies. A possible interpretation of the physical meaning of the inductive descriptors has been suggested by considering a neutral molecule as an electrical capacitor formed by charged atomic spheres. This approximation relates the inductive chemical softness and hardness of bound atom(s) with the total area of the facings of the electrical capacitor formed by the atom(s) and the rest of the molecule. The derived full electronegativity equalization scheme allows iterative calculation of inductive partial charges on the basis of atomic electronegativities, covalent radii, and intramolecular distances. A range of inductive descriptors has been computed for a variety of organic compounds. The calculated inductive charges in the studied molecules have been validated by experimental C-1s electron core binding energies and molecular dipole moments. Several semiempirical chemical rules, such as the arithmetic mean of equalized electronegativity, the principle of maximum hardness, and the principle of hardness borrowing, could be explicitly illustrated in the framework of the developed approach.
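    The iterative equalization idea can be illustrated with a generic toy scheme (not Cherkasov's actual inductive formalism): each atom's effective electronegativity χ_i = χ⁰_i + 2η_i·q_i is relaxed toward the molecular mean while the net charge is held at zero. The χ⁰ and η values below are rough illustrative numbers for a C-H-O fragment:

```python
import numpy as np

# Toy electronegativity-equalization iteration (illustrative parameters).
chi0 = np.array([2.55, 2.20, 3.44])   # intrinsic electronegativities: C, H, O
eta  = np.array([5.0, 6.4, 6.1])      # assumed atomic hardnesses

q = np.zeros_like(chi0)               # start from neutral atoms
for _ in range(300):
    chi = chi0 + 2.0 * eta * q        # effective electronegativities
    q += 0.1 * (chi.mean() - chi) / (2.0 * eta)   # flow charge toward the mean
    q -= q.mean()                     # enforce overall charge neutrality

chi = chi0 + 2.0 * eta * q            # converged: all chi_i (nearly) equal
```

At convergence the effective electronegativities equalize and the most electronegative atom (oxygen here) carries the negative partial charge, as chemical intuition suggests.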

  5. Nonuniform update for sparse target recovery in fluorescence molecular tomography accelerated by ordered subsets.

    PubMed

    Zhu, Dianwen; Li, Changqing

    2014-12-01

    Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades since it can locate a specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstruction of the fluorescent probe distribution in small animals due to the large computational burden, the noisy measurement and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L1 regularization and an ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subsets (OS) technique provided additional 5-fold and 3-fold speed enhancements for the simulation and phantom experiments, respectively, without degrading image quality. When compared with popular L1 algorithms such as the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them by obtaining a better image in a much shorter period of time.
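    For reference, the ISTA baseline named above is only a few lines: each iteration takes a gradient step on the data-fidelity term and then applies soft thresholding, the proximal operator of the L1 penalty. A minimal numpy sketch on a synthetic sparse-recovery problem (the matrix size, sparsity pattern, and λ are illustrative choices, not an FMT forward model):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))           # underdetermined sensing matrix
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]      # sparse "fluorophore" distribution
x_hat = ista(A, A @ x_true, lam=0.05)
```

FISTA adds a momentum step to the same iteration, and NUMOS (per the abstract) replaces the uniform step 1/L with pixelwise nonuniform preconditioning.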

  6. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. 
Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.

  7. Sparse-view proton computed tomography using modulated proton beams.

    PubMed

    Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong

    2015-02-01

    Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to imaging approaches based on a trajectory-tracking detector. In addition, it is relatively simple to implement in conventional proton therapy equipment. The geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that must be addressed for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and the EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally spaced over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density. The bone, for example, was reconstructed within 1% error. 
EM-based algorithms produced an increased image noise and RMSE as the iteration reaches about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although it still seems that the images need to be improved for practical applications to the treatment planning, proton CT imaging by use of the modulated beams in sparse-view sampling has demonstrated its feasibility.

  8. Mimicking biological stress-strain behaviour with synthetic elastomers

    NASA Astrophysics Data System (ADS)

    Vatankhah-Varnosfaderani, Mohammad; Daniel, William F. M.; Everhart, Matthew H.; Pandya, Ashish A.; Liang, Heyi; Matyjaszewski, Krzysztof; Dobrynin, Andrey V.; Sheiko, Sergei S.

    2017-09-01

    Despite the versatility of synthetic chemistry, certain combinations of mechanical softness, strength, and toughness can be difficult to achieve in a single material. These combinations are, however, commonplace in biological tissues, and are therefore needed for applications such as medical implants, tissue engineering, soft robotics, and wearable electronics. Present materials synthesis strategies are predominantly Edisonian, involving the empirical mixing of assorted monomers, crosslinking schemes, and occluded swelling agents, but this approach yields limited property control. Here we present a general strategy for mimicking the mechanical behaviour of biological materials by precisely encoding their stress-strain curves in solvent-free brush- and comb-like polymer networks (elastomers). The code consists of three independent architectural parameters—network strand length, side-chain length and grafting density. Using prototypical poly(dimethylsiloxane) elastomers, we illustrate how this parametric triplet enables the replication of the strain-stiffening characteristics of jellyfish, lung, and arterial tissues.

  9. Differential dynamic microscopy microrheology of soft materials: A tracking-free determination of the frequency-dependent loss and storage moduli

    NASA Astrophysics Data System (ADS)

    Edera, Paolo; Bergamini, Davide; Trappe, Véronique; Giavazzi, Fabio; Cerbino, Roberto

    2017-12-01

    Particle-tracking microrheology (PT-μr) exploits the thermal motion of embedded particles to probe the local mechanical properties of soft materials. Despite its appealing conceptual simplicity, PT-μr requires calibration procedures and operating assumptions that constitute a practical barrier to its wider application. Here we demonstrate differential dynamic microscopy microrheology (DDM-μr), a tracking-free approach based on the multiscale, temporal correlation study of the image intensity fluctuations that are observed in microscopy experiments as a consequence of the translational and rotational motion of the tracers. We show that the mechanical moduli of an arbitrary sample are determined correctly over a wide frequency range provided that the standard DDM analysis is reinforced with an iterative, self-consistent procedure that fully exploits the multiscale information made available by DDM. Our approach to DDM-μr does not require any prior calibration, is in agreement with both traditional rheology and diffusing wave spectroscopy microrheology, and works in conditions where PT-μr fails, providing thus an operationally simple, calibration-free probe of soft materials.
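    The core quantity in the DDM analysis is the image structure function, the q-resolved power spectrum of image differences averaged over time pairs. The sketch below computes it for an arbitrary image stack; the stack here is a random stand-in (a real measurement would be a microscopy movie, and the full DDM-μr analysis would additionally azimuthally average over q, fit an intermediate scattering function, and apply the paper's iterative self-consistent correction):

```python
import numpy as np

def structure_function(frames, lag):
    # D(q, lag) = < |FFT2(I(t + lag) - I(t))|^2 >, averaged over start times t.
    diffs = frames[lag:] - frames[:-lag]
    return (np.abs(np.fft.fft2(diffs)) ** 2).mean(axis=0)

rng = np.random.default_rng(0)
frames = rng.random((20, 64, 64))   # stand-in image stack (time, y, x)
D1 = structure_function(frames, 1)  # one 2D map of D(q) per time lag
```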

  10. Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.

    PubMed

    Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C

    2015-05-13

    This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the soft conical-shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and the forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an average error of 8% relative to the total arm length.

  11. Stress concentration in periodically rough Hertzian contact: Hertz to soft-flat-punch transition

    PubMed Central

    Raphaël, E.; Léger, L.; Restagno, F.; Poulard, C.

    2016-01-01

    We report on the elastic contact between a spherical lens and a patterned substrate, composed of a hexagonal lattice of cylindrical pillars. The stress field and the size of the contact area are obtained by means of numerical methods: a superposition method of discrete pressure elements and an iterative bisection-like method. For small indentations, a transition from a Hertzian to a soft-flat-punch behaviour is observed when the surface fraction of the substrate that is covered by the pillars is increased. In particular, we present a master curve defined by two dimensionless parameters, which allows one to predict the stress at the centre of the contact region in terms of the surface fraction occupied by pillars. The transition between the limiting contact regimes, Hertzian and soft-flat-punch, is well described by a rational function. Additionally, a simple model to describe the Boussinesq–Cerruti-like contact between the lens and a single elastic pillar, which takes into account the pillar geometry and the elastic properties of the two bodies, is presented. PMID:27713659

  12. Vocal-fold collision mass as a differentiator between registers in the low-pitch range.

    PubMed

    Vilkman, E; Alku, P; Laukkanen, A M

    1995-03-01

    Register shift between the chest and falsetto register is generally studied in the higher-than-speaking pitch range. However, a similar difference can also be produced at speaking pitch level. The shift from breathy "falsetto" phonation to normal chest voice phonation was studied in normal female (pitch range 170-180 Hz) and male (pitch range 94-110 Hz) subjects. The phonations gliding from falsetto to normal chest voice were analyzed using iterative adaptive inverse filtering and electroglottography. Both trained and untrained, as well as female and male subjects, were able to produce an abrupt register shift from soft falsetto to soft chest register phonation. The differences between male and female speakers in the glottal flow waveforms were smaller than expected. The register shift is interpreted in terms of a "critical mass" concept of chest register phonation.

  14. What are the characteristics of breast cancers misclassified as benign by quantitative ultrasound shear wave elastography?

    PubMed

    Vinnicombe, S J; Whelehan, P; Thomson, K; McLean, D; Purdie, C A; Jordan, L B; Hubbard, S; Evans, A J

    2014-04-01

    Shear wave elastography (SWE) is a promising adjunct to greyscale ultrasound in differentiating benign from malignant breast masses. The purpose of this study was to characterise breast cancers which are not stiff on quantitative SWE, to elucidate potential sources of error in clinical application of SWE to evaluation of breast masses. Three hundred and two consecutive patients examined by SWE who underwent immediate surgery for breast cancer were included. Characteristics of 280 lesions with suspicious SWE values (mean stiffness >50 kPa) were compared with 22 lesions with benign SWE values (<50 kPa). Statistical significance of the differences was assessed using non-parametric goodness-of-fit tests. Pure ductal carcinoma in situ (DCIS) masses were more often soft on SWE than masses representing invasive breast cancer. Invasive cancers that were soft were more frequently: histological grade 1, tubular subtype, ≤10 mm invasive size and detected at screening mammography. No significant differences were found with respect to the presence of invasive lobular cancer, vascular invasion, hormone and HER-2 receptor status. Lymph node positivity was less common in soft cancers. Malignant breast masses classified as benign by quantitative SWE tend to have better prognostic features than those correctly classified as malignant. • Over 90% of cancers assessable with ultrasound have a mean stiffness >50 kPa. • 'Soft' invasive cancers are frequently small (≤10 mm), low grade and screen-detected. • Pure DCIS masses are more often soft than invasive cancers (>40 %). • Large symptomatic masses are better evaluated with SWE than small clinically occult lesions. • When assessing small lesions, 'softness' should not raise the threshold for biopsy.

  15. Parametrically excited motion of a levitated rigid bar over high-Tc superconducting bulks

    NASA Astrophysics Data System (ADS)

    Shimizu, T.; Sugiura, T.; Ogawa, S.

    2006-10-01

    High-Tc superconducting levitation systems achieve stable levitation under non-contact support without active control. This feature can be applied to flywheels, magnetically levitated trains, and so on. However, non-contact support provides little damping, so these systems can exhibit complicated dynamic phenomena due to nonlinearity in the magnetic force. For application to large-scale machines, we need to analyze the dynamics of a large levitated body supported at multiple points. This research deals with the nonlinearly coupled oscillation of a homogeneous, symmetric rigid bar supported at both ends by equal electromagnetic forces between superconductors and permanent magnets. In our past study, using a rigid bar, we found combination resonance, which arises from asymmetry of the system. However, even if the support forces are symmetric, parametric resonance can occur. With a simple symmetric model, this research focuses especially on parametric resonance and evaluates the nonlinear effect of the symmetric support forces by experiment and numerical analysis. The obtained results show that two modes, caused by coupling of horizontal translation and roll motion, can be excited nonlinearly when the superconductor is excited vertically in the neighborhood of twice the natural frequencies of those modes. We confirmed that these resonances exhibit nonlinear characteristics such as soft-spring behavior and hysteresis.

  16. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation; consequently, the time domain signal is not distorted by frequency domain and inverse transformations.

  17. Study of Geometric Porosity on Static Stability and Drag Using Computational Fluid Dynamics for Rigid Parachute Shapes

    NASA Technical Reports Server (NTRS)

    Greathouse, James S.; Schwing, Alan M.

    2015-01-01

    This paper explores the use of computational fluid dynamics to study the effect of geometric porosity on static stability and drag for NASA's Multi-Purpose Crew Vehicle main parachute. Both of these aerodynamic characteristics are of interest in parachute design, and computational methods promise designers the ability to perform detailed parametric studies and other design iterations with a level of control previously unobtainable using ground or flight testing. The approach presented here uses a canopy structural analysis code to define the inflated parachute shapes, on which structured computational grids are generated. These grids are used by the computational fluid dynamics code OVERFLOW and are modeled as rigid, impermeable bodies for this analysis. Comparisons to Apollo drop test data are shown as preliminary validation of the technique. Results include several parametric sweeps through design variables in order to better understand the trade between static stability and drag. Finally, designs that maximize static stability with a minimal loss in drag are suggested for further study in subscale ground and flight testing.

  18. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
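The core fitting step described above, maximum-likelihood estimation of a cumulative Gaussian psychometric function from binary responses, can be sketched as follows. This toy uses a brute-force grid search rather than the paper's bias-reduced fit, and all stimulus and parameter values are invented for illustration:

```python
import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(7)
mu_true, sigma_true = 0.5, 1.0

# Simulate binary "detected" responses to randomly placed stimuli.
trials = []
for _ in range(1000):
    s = random.uniform(-3.0, 3.0)
    p = phi((s - mu_true) / sigma_true)
    trials.append((s, 1 if random.random() < p else 0))

def negloglik(mu, sigma):
    nll = 0.0
    for s, y in trials:
        p = min(max(phi((s - mu) / sigma), 1e-9), 1.0 - 1e-9)
        nll -= math.log(p) if y else math.log(1.0 - p)
    return nll

# Brute-force maximum-likelihood fit over a parameter grid.
_, mu_hat, sigma_hat = min(
    ((negloglik(m / 10, s / 10), m / 10, s / 10)
     for m in range(-10, 21) for s in range(3, 31)),
    key=lambda t: t[0])
```

With randomly placed stimuli there is no serial dependency, so no spread bias arises; the bias the paper addresses appears when the stimulus placement itself depends on previous responses, as in staircase procedures.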

  19. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
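Gallager's hard-decision bit-flipping algorithm mentioned above can be sketched on a toy (7,4) Hamming parity-check matrix rather than a finite-geometry code. Note that the chosen error position corresponds to a weight-3 column, where the flip criterion is unambiguous; positions with weight-1 columns can produce ties, which is one reason real LDPC constructions aim for good structural properties:

```python
# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(r, max_iters=10):
    """Gallager-style bit flipping: repeatedly flip the bit involved in the
    largest number of unsatisfied parity checks until the syndrome is zero."""
    r = list(r)
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] & r[j] for j in range(7)) % 2 for i in range(3)]
        if not any(syndrome):
            return r  # all checks satisfied
        # For each bit, count unsatisfied checks it participates in.
        counts = [sum(H[i][j] for i in range(3) if syndrome[i]) for j in range(7)]
        r[counts.index(max(counts))] ^= 1  # flip the most suspicious bit
    return r

# The all-zeros word is a codeword; inject one error at position 6.
received = [0, 0, 0, 0, 0, 0, 1]
decoded = bit_flip_decode(received)
```

Here the single error triggers all three checks, and bit 6 is the only bit appearing in all of them, so one flip restores the codeword.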

  20. Fractal nematic colloids

    PubMed Central

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity, where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter. PMID:28117325

  1. An iterative hyperelastic parameters reconstruction for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Samani, Abbas

    2008-03-01

    In breast elastography, breast tissues usually undergo large compressions resulting in significant geometric and structural changes, and consequently nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing the nonlinear behavior of tissue are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied the Yeoh and polynomial models to describe the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom, which comprises a hemisphere connected to a cylinder. This phantom consists of two types of soft tissue, mimicking adipose and fibroglandular tissues, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.
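For reference, the Yeoh model named above gives a closed-form uniaxial Cauchy stress for an incompressible material, which is the kind of forward evaluation the FE solver performs at each element. The coefficient values here are arbitrary illustrations; the small-strain slope should approach E = 6*C1:

```python
def yeoh_uniaxial_stress(lam, c1, c2, c3):
    """Cauchy stress for incompressible uniaxial stretch lam under the Yeoh model
    W = c1*(I1-3) + c2*(I1-3)**2 + c3*(I1-3)**3, with I1 = lam**2 + 2/lam:
    sigma = 2*(lam**2 - 1/lam) * dW/dI1.
    """
    i1m3 = lam**2 + 2.0 / lam - 3.0
    dw_di1 = c1 + 2.0 * c2 * i1m3 + 3.0 * c3 * i1m3**2
    return 2.0 * (lam**2 - 1.0 / lam) * dw_di1
```

An iterative inversion scheme like the paper's repeatedly adjusts (c1, c2, c3) until stresses or displacements predicted by such a forward model match the measured ones.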

  2. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
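The code-aided idea, feeding soft symbol decisions back into the phase estimator, can be sketched in a stripped-down uncoded form, with a tanh soft demapper standing in for the MAP decoder's soft outputs. All parameters are illustrative, and real BPSK still carries a pi phase ambiguity that the paper's ambiguity-resolution step addresses:

```python
import cmath, math, random

random.seed(1)
theta_true, sigma = 0.3, 0.4

# BPSK symbols rotated by an unknown carrier phase, plus complex Gaussian noise.
symbols = [random.choice((-1.0, 1.0)) for _ in range(500)]
rx = [s * cmath.exp(1j * theta_true)
      + complex(random.gauss(0, sigma), random.gauss(0, sigma))
      for s in symbols]

theta = 0.0  # initial phase estimate
for _ in range(10):
    # Soft symbol decisions given the current phase estimate
    # (tanh of the scaled real part is the BPSK soft demapper)...
    soft = [math.tanh((r * cmath.exp(-1j * theta)).real / sigma**2) for r in rx]
    # ...then re-estimate the phase from the soft-remodulated signal.
    theta = cmath.phase(sum(r * s for r, s in zip(rx, soft)))
```

In the paper the soft decisions come from the MAP decoder of the actual code, which is what lets the estimator work at very low SNR without a training sequence.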

  3. GEM detectors development for radiation environment: neutron tests and simulations

    NASA Astrophysics Data System (ADS)

    Chernyshova, Maryna; Jednoróg, Sławomir; Malinowski, Karol; Czarski, Tomasz; Ziółkowski, Adam; Bieńkowska, Barbara; Prokopowicz, Rafał; Łaszyńska, Ewa; Kowalska-Strzeciwilk, Ewa; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Krawczyk, Rafał D.; Linczuk, Paweł; Potrykus, Paweł; Bajdel, Barcel

    2016-09-01

    One of the requirements of the ongoing ITER-Like Wall Project is to have diagnostics for Soft X-Ray (SXR) monitoring in a tokamak. Such diagnostics should be focused on tungsten emission measurements, as increased attention is currently paid to tungsten because it has become the main candidate plasma-facing material for ITER and future fusion reactors. In addition, such diagnostics should be able to withstand the harsh radiation environment of a tokamak during operation. The presented work relates to the development of such diagnostics based on Gas Electron Multiplier (GEM) technology. More specifically, the influence of neutron radiation on the performance of GEM detectors is studied both experimentally and through computer simulations. The neutron-induced radioactivity (after neutron source exposure) was found to be not pronounced compared with the impact of other secondary neutron reaction products (during the exposure).

  4. Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.

    PubMed

    Majumder, Saikat; Verma, Shrish

    2015-01-01

    Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon/convolutional code. Simulation results show significant improvement in performance compared to the existing scheme based on compound codes.

  5. Justification and refinement of Winkler–Fuss hypothesis

    NASA Astrophysics Data System (ADS)

    Kaplunov, J.; Prikazchikov, D.; Sultanova, L.

    2018-06-01

    Two-parametric asymptotic analysis of the equilibrium of an elastic half-space coated by a thin soft layer is developed. The initial scaling is motivated by the exact solution of the plane problem for a vertical harmonic load. It is established that the Winkler-Fuss hypothesis is valid only for a sufficiently high contrast in the stiffnesses of the layer and the half-space. As an alternative, a uniformly valid non-local approximation is proposed. Higher-order corrections to the Winkler-Fuss formulation, such as the Pasternak model, are also studied.

  6. Estimation of viscoelastic parameters in Prony series from shear wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Jae-Wook; Hong, Jung-Wuk, E-mail: j.hong@kaist.ac.kr, E-mail: jwhong@alum.mit.edu; Lee, Hyoung-Ki

    2016-06-21

    When acquiring accurate ultrasonic images, we must precisely estimate the mechanical properties of the soft tissue. This study investigates and estimates the viscoelastic properties of the tissue by analyzing shear waves generated through an acoustic radiation force. The shear waves are sourced from a localized pushing force acting for a certain duration, and the generated waves travel horizontally. The wave velocities depend on the mechanical properties of the tissue such as the shear modulus and viscoelastic properties; therefore, we can inversely calculate the properties of the tissue through parametric studies.
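The inverse step described above can be sketched in its simplest form: recover the shear wave speed from the arrival-time difference between two lateral positions (via cross-correlation) and convert it to an elastic shear modulus through G = rho*c^2. The traces and constants below are synthetic assumptions; a full Prony-series fit would additionally match the frequency-dependent dispersion, which this toy ignores:

```python
import math

fs = 10_000.0           # sampling rate, Hz (assumed)
c_true = 2.0            # shear wave speed, m/s (assumed, soft-tissue scale)
x1, x2 = 0.010, 0.020   # receiver positions, m
rho = 1000.0            # tissue density, kg/m^3

def pulse(t):
    """Gaussian displacement pulse launched by the push."""
    return math.exp(-((t - 0.002) ** 2) / (2 * 0.0004 ** 2))

n = 200
trace1 = [pulse(k / fs - x1 / c_true) for k in range(n)]
trace2 = [pulse(k / fs - x2 / c_true) for k in range(n)]

def best_lag(a, b, max_lag):
    """Lag (in samples) maximizing the cross-correlation of b against a."""
    return max(range(max_lag + 1),
               key=lambda L: sum(a[k] * b[k + L] for k in range(len(a) - L)))

lag = best_lag(trace1, trace2, 100)
dt = lag / fs
c_hat = (x2 - x1) / dt       # time-of-flight wave speed estimate
G = rho * c_hat ** 2         # shear modulus from the inverted speed
```

Here the pulse arrives 50 samples later at the farther receiver, giving c_hat = 2 m/s and G = 4 kPa exactly; with viscoelasticity the speed becomes frequency dependent and the parametric fitting the abstract describes is required.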

  7. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  8. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. 
© RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  9. Electrokinetic energy conversion efficiency of viscoelastic fluids in a polyelectrolyte-grafted nanochannel.

    PubMed

    Jian, Yongjun; Li, Fengqin; Liu, Yongbo; Chang, Long; Liu, Quansheng; Yang, Liangui

    2017-08-01

    In order to conduct an extensive investigation of the energy harvesting capabilities of nanofluidic devices, we provide analytical solutions for the streaming potential and electrokinetic energy conversion (EKEC) efficiency, taking into account the combined effects of a soft nanochannel (a rigid nanochannel whose surface is covered by a charged polyelectrolyte layer) and viscoelastic rheology. The viscoelasticity of the fluid is described by the Maxwell constitutive model when the forcing frequency of an oscillatory pressure-driven flow matches the inverse of the relaxation time scale of a typical viscoelastic fluid. We compare the streaming potential and EKEC efficiency with those of a rigid nanochannel having a zeta potential equal to the electrostatic potential at the solid-polyelectrolyte interface of the soft nanochannel. Within the selected parameter ranges, the peaks of maximal streaming potential and EKEC efficiency for the rigid nanochannel are larger than those for the soft nanochannel when the forcing frequencies of the driving pressure gradient are close to the resonant frequencies. However, a more enhanced streaming potential and EKEC efficiency for the soft nanochannel are found in most of the region away from these resonant frequencies. Moreover, the influence of several dimensionless parameters on the EKEC efficiency is discussed in detail. Finally, within the given parametric regions, the maximum efficiency at a resonant frequency obtained in the present analysis is about 25%. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. 
They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
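Rayleigh quotient iteration as described above can be sketched for a small symmetric matrix, a toy stand-in for the transport eigenproblem (the matrix and starting vector below are arbitrary). Each step shifts by the current Rayleigh quotient and solves a linear system, which is exactly why the shifted systems become near-singular and benefit from preconditioning:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def rqi(A, x, iters=8):
    """Rayleigh quotient iteration for a symmetric matrix (cubically convergent)."""
    sigma = 0.0
    for _ in range(iters):
        nrm = sum(v * v for v in x) ** 0.5
        x = [v / nrm for v in x]
        Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        sigma = sum(x[i] * Ax[i] for i in range(3))  # Rayleigh quotient shift
        shifted = [[A[i][j] - (sigma if i == j else 0.0) for j in range(3)]
                   for i in range(3)]
        try:
            x = solve3(shifted, x)   # inverse iteration with the current shift
        except ZeroDivisionError:
            break                    # shift hit an eigenvalue exactly: converged
    return sigma

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
sigma = rqi(A, [0.0, 0.0, 1.0])
```

The eigenvalues of this matrix are 3 and 3 ± sqrt(3); RQI locks onto the one nearest the evolving shift within a few iterations.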

  11. Bubble dynamics in viscoelastic soft tissue in high-intensity focal ultrasound thermal therapy.

    PubMed

    Zilonova, E; Solovchuk, M; Sheu, T W H

    2018-01-01

    The present study aims to investigate bubble dynamics in soft tissue to which a continuous harmonic HIFU pulse is applied, by introducing a viscoelastic cavitation model. After a comparison of several existing cavitation models, we chose the Gilmore-Akulichev model. This cavitation model is coupled with the Zener viscoelastic model in order to simulate soft-tissue features such as elasticity and relaxation time. The proposed Gilmore-Akulichev-Zener model was used to explore cavitation dynamics. The parametric study led to the conclusion that elasticity and viscosity both damp bubble oscillations, whereas the relaxation effect depends mainly on the period of the ultrasound wave. A similar influence of elasticity, viscosity, and relaxation time on the temperature inside the bubble is observed. Cavitation heat source terms (corresponding to viscous damping and the pressure wave radiated by bubble collapse) were derived from the proposed model to examine the significance of cavitation during treatment. Their maximum values both dominate the acoustic ultrasound term in HIFU applications. Elasticity was found to reduce the deposited heat for both cavitation terms. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Closed-loop control of cellular functions using combinatory drugs guided by a stochastic search algorithm

    PubMed Central

    Wong, Pak Kin; Yu, Fuqu; Shahangian, Arash; Cheng, Genhong; Sun, Ren; Ho, Chih-Ming

    2008-01-01

    A mixture of drugs is often more effective than using a single effector. However, it is extremely challenging to identify potent drug combinations by trial and error because of the large number of possible combinations and the inherent complexity of the underlying biological network. With a closed-loop optimization modality, we experimentally demonstrate effective searching for potent drug combinations for controlling cellular functions through a large parametric space. Only tens of iterations out of one hundred thousand possible trials were needed to determine a potent combination of drugs for inhibiting vesicular stomatitis virus infection of NIH 3T3 fibroblasts. In addition, the drug combination reduced the required dosage by ≈10-fold compared with individual drugs. In another example, a potent mixture was identified in thirty iterations out of a possible million combinations of six cytokines that regulate the activity of nuclear factor kappa B in 293T cells. The closed-loop optimization approach possesses the potential of being an effective approach for manipulating a wide class of biological systems. PMID:18356295
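The closed-loop search idea above, evaluating only a tiny fraction of a huge combinatorial dosage space, can be caricatured with a greedy coordinate search on a hypothetical six-drug response surface. The surface, optimum, and dosage levels are all invented, and greedy coordinate descent here stands in for the paper's stochastic search algorithm:

```python
# Hypothetical smooth response surface over six drug-dosage levels (0..9 each):
# a million possible combinations, searched with a few coordinate sweeps.
OPT = (3, 7, 1, 5, 0, 8)   # invented optimal dosage combination

def response(combo):
    """Measured cellular response; lower is better, minimized exactly at OPT."""
    return sum((c - o) ** 2 for c, o in zip(combo, OPT))

combo = [0] * 6
evaluations = 0
improved = True
while improved:
    improved = False
    for i in range(6):           # one drug at a time
        for level in range(10):  # try every dosage level for this drug
            trial = combo[:i] + [level] + combo[i + 1:]
            evaluations += 1
            if response(trial) < response(combo):
                combo, improved = trial, True
```

Because this invented surface is separable, the optimum is found after about a hundred evaluations out of a million combinations; real dose-response landscapes are coupled and noisy, which is why the paper uses a stochastic search with experimental feedback in the loop.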

  13. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
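The Gauss-Newton machinery named above can be sketched on a toy nonlinear least-squares problem: fitting an exponential decay with a step-halving guard. This is a two-parameter stand-in for the regularized, inexact-CG-solved CSEM updates, and all data values are synthetic:

```python
import math

# Noiseless synthetic data y = a*exp(-b*t); recover (a, b) by Gauss-Newton.
a_true, b_true = 2.0, 0.5
ts = [0.5 * k for k in range(11)]                    # t = 0 .. 5
ys = [a_true * math.exp(-b_true * t) for t in ts]

def residual_norm(a, b):
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, ys))

a, b = 1.5, 0.3  # initial model guess
for _ in range(50):
    # Build the 2x2 normal equations  J^T J d = -J^T r.
    j11 = j12 = j22 = g1 = g2 = 0.0
    for t, y in zip(ts, ys):
        e = math.exp(-b * t)
        r = a * e - y
        ja, jb = e, -a * t * e            # d(model)/da, d(model)/db
        j11 += ja * ja; j12 += ja * jb; j22 += jb * jb
        g1 += ja * r;   g2 += jb * r
    det = j11 * j22 - j12 * j12
    da = -(j22 * g1 - j12 * g2) / det
    db = -(j11 * g2 - j12 * g1) / det
    step = 1.0
    while (residual_norm(a + step * da, b + step * db) > residual_norm(a, b)
           and step > 1e-6):
        step *= 0.5                        # damp the update if it overshoots
    a, b = a + step * da, b + step * db
```

In the CSEM setting the Jacobian comes from the adaptive FE forward solves and the normal equations are solved approximately by conjugate gradients rather than exactly, but the outer iteration has the same shape.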

  14. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  15. Iterative metal artefact reduction in CT: can dedicated algorithms improve image quality after spinal instrumentation?

    PubMed

    Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J

    2017-05-01

    To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT examinations of 24 patients, performed between March 2015 and July 2016, were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentation, and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures, including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of subjective image analysis showed that both iMAR algorithms improved the visualisation of soft-tissue structures (median iMAR-Algo1=3; interquartile range [IQR]: 1.5-3; iMAR-Algo2=4; IQR: 3.5-4) and bone structures (iMAR-Algo1=3; IQR: 3-4; iMAR-Algo2=4; IQR: 4-5) compared with WFBP (soft tissue: median 2; IQR: 0.5-2; bone structures: median 2; IQR: 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP; however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  16. Inverse finite element methods for extracting elastic-poroviscoelastic properties of cartilage and other soft tissues from indentation

    NASA Astrophysics Data System (ADS)

    Namani, Ravi

    Mechanical properties are essential for understanding diseases that afflict various soft tissues, such as cartilage affected by osteoarthritis and cardiovascular arteries altered by hypertension. Although the linear elastic modulus is routinely measured for hard materials, standard methods are not available for extracting the nonlinear elastic, linear elastic, and time-dependent properties of soft tissues. Consequently, the focus of this work is to develop indentation methods for soft biological tissues; since analytical solutions are not available in the general context, finite element simulations are used. First, parametric studies of finite indentation of hyperelastic layers are performed to examine whether indentation has the potential to identify nonlinear elastic behavior. To answer this, spherical, flat-ended conical, and cylindrical tips are examined and the influence of thickness is exploited. The influence of the specimen/substrate boundary condition (slip or no-slip) is also clarified. Second, a new inverse method, the hyperelastic extraction algorithm (HPE), was developed to extract two nonlinear elastic parameters from the indentation force-depth data, which is the basic measurement in an indentation test. The accuracy of the extracted parameters and the influence of measurement noise on this accuracy were obtained. This showed that the standard Berkovich tip could extract only one parameter with sufficient accuracy, since the indentation force-depth curve has limited sensitivity to both nonlinear elastic parameters. Third, indentation methods for testing tissues from small animals were explored. New methods for flat-ended conical tips are derived. These account for practical test issues such as the difficulty of locating the surface of soft specimens. Finite element simulations are also used to elucidate the influence of specimen curvature on the indentation force-depth curve. 
    Fourth, the influence of inhomogeneity and material anisotropy on the extracted "average" linear elastic modulus was studied. The focus here is on murine tibial cartilage, since recent experiments have shown that the modulus measured with a 15 μm tip is considerably larger than that obtained with a 90 μm tip. It is shown that a depth-dependent modulus could give rise to such a size effect. Lastly, parametric studies were performed within the small-strain setting to understand the influence of permeability and viscoelastic properties on the indentation stress-relaxation response. The focus here is on cartilage, and specific test protocols (single-step vs. multi-step stress relaxation) are explored. An inverse algorithm was developed to extract the poroviscoelastic parameters. A sensitivity study using this algorithm shows that the instantaneous elastic modulus (a measure of the viscous relaxation) can be extracted with very good accuracy, but the permeability and long-time relaxation constant cannot be extracted with good accuracy. The thesis concludes with the implications of these studies, discussing the potential and limitations of indentation tests for studying cartilage and other soft tissues.
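The simplest version of the inverse step this thesis builds on can be sketched with Hertzian spherical indentation: the force-depth relation F = (4/3)*E_eff*sqrt(R)*d^(3/2) is linear in the effective modulus, so E_eff follows from a one-parameter least-squares fit. The radius, modulus, and depth values below are illustrative assumptions; real tissue requires the nonlinear and poroviscoelastic treatment described above:

```python
# Hertzian spherical indentation: F = (4/3) * E_eff * sqrt(R) * depth**1.5.
# F is linear in E_eff, so least squares recovers it in closed form.
R = 0.5e-3           # indenter tip radius, m (assumed)
E_eff_true = 20e3    # effective modulus, Pa (assumed soft-tissue scale)

depths = [i * 1e-5 for i in range(1, 11)]                      # 10..100 um
forces = [(4.0 / 3.0) * E_eff_true * R**0.5 * d**1.5 for d in depths]

# Regressor x_i so that F_i = E_eff * x_i; one-parameter least squares.
xs = [(4.0 / 3.0) * R**0.5 * d**1.5 for d in depths]
E_eff_hat = sum(x * f for x, f in zip(xs, forces)) / sum(x * x for x in xs)
```

When the material is nonlinear or time dependent, no such closed form exists, which is why the thesis resorts to FE-based iterative inversion.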

  17. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  18. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods.
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
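
    A minimal sketch of the shrinkage-thresholding iteration underlying FISTA, applied to a generic sparsity-regularized least-squares problem. A plain gradient step stands in for the paper's OS-SART subproblem, and a small random matrix stands in for the CBCT system model:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=300):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov momentum.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))                    # toy "system matrix"
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]               # sparse "image"
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
```

    Dropping the momentum step (always using y = x_new) recovers plain ISTA, whose O(1/k) convergence is what the momentum accelerates to O(1/k²).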

  19. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  20. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  1. Randomized controlled clinical study evaluating effectiveness and safety of a volume-stable collagen matrix compared to autogenous connective tissue grafts for soft tissue augmentation at implant sites.

    PubMed

    Thoma, Daniel S; Zeltner, Marco; Hilbe, Monika; Hämmerle, Christoph H F; Hüsler, Jürg; Jung, Ronald E

    2016-10-01

    To test whether or not the use of a collagen matrix (VCMX) results in short-term soft tissue volume increase at implant sites non-inferior to an autogenous subepithelial connective tissue graft (SCTG), and to evaluate safety and tissue integration of VCMX and SCTG. In 20 patients with a volume deficiency at single-tooth implant sites, soft tissue volume augmentation was performed, randomly allocating either VCMX or SCTG. Soft tissue thickness, patient-reported outcome measures (PROMs), and safety were assessed up to 90 days (FU-90). At FU-90 (abutment connection), tissue samples were obtained for histological analysis. Descriptive analysis was computed for both groups. Non-parametric tests were applied to test non-inferiority for the gain in soft tissue thickness at the occlusal site. Median soft tissue thickness increased between BL and FU-90 by 1.8 mm (Q1:0.5; Q3:2.0) (VCMX) (p = 0.018) and 0.5 mm (-1.0; 2.0) (SCTG) (p = 0.395) (occlusal) and by 1.0 mm (0.5; 2.0) (VCMX) (p = 0.074) and 1.5 mm (-2.0; 2.0) (SCTG) (p = 0.563) (buccal). Non-inferiority with a non-inferiority margin of 1 mm could be demonstrated (p = 0.020); the difference between the two group medians (1.3 mm) for occlusal sites indicated a relevant, but not statistically significant, superiority of VCMX versus SCTG (primary endpoint). Pain medication consumption and pain perceived were non-significantly higher in group SCTG up to day 3. Median physical pain (OHIP-14) at day 7 was 100% higher for SCTG than for VCMX. The histological analysis revealed well-integrated grafts. Soft tissue augmentation at implant sites resulted in a similar or higher soft tissue volume increase after 90 days for VCMX versus SCTG. PROMs did not reveal relevant differences between the two groups. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Algorithm for parametric community detection in networks.

    PubMed

    Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo

    2012-07-01

    Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: Small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. 
Experimental results on artificial and real world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is moderate; (iv) the heuristic version of the proposed parametric method, viewed as a modularity maximization tool, gives better results than the CNM heuristic for large instances.
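
    The single-parameter objective studied here, Q(α) = Σ_c [e_c/m − α(d_c/2m)²], reduces to standard modularity at α = 1 and can be evaluated directly. A sketch over an invented toy graph (two triangles joined by a bridge edge):

```python
from collections import defaultdict

def parametric_modularity(edges, part, alpha=1.0):
    # Q(alpha) = sum over communities c of  e_c/m - alpha * (d_c / 2m)^2,
    # where e_c = edges inside c, d_c = total degree of c, m = |edges|.
    # alpha = 1 recovers standard Newman-Girvan modularity.
    m = len(edges)
    e_in = defaultdict(int)   # within-community edge counts
    deg = defaultdict(int)    # community degree sums
    for u, v in edges:
        deg[part[u]] += 1
        deg[part[v]] += 1
        if part[u] == part[v]:
            e_in[part[u]] += 1
    return sum(e_in[c] / m - alpha * (deg[c] / (2 * m)) ** 2 for c in deg)

# Two triangles joined by a single bridge edge; the natural partition
# places each triangle in its own community.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
```

    Sweeping α traces out exactly the kind of parameter intervals the exact algorithm enumerates: small α rewards coarse partitions, large α rewards fine ones.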

  3. Swarm intelligence inspired shills and the evolution of cooperation.

    PubMed

    Duan, Haibin; Sun, Changhao

    2014-06-09

    Many hostile scenarios exist in real-life situations, where cooperation is disfavored and the collective behavior needs intervention for system efficiency improvement. Towards this end, the framework of soft control provides a powerful tool by introducing controllable agents called shills, who are allowed to follow well-designed updating rules for varying missions. Inspired by swarm intelligence emerging from flocks of birds, we explore here the dependence of the evolution of cooperation on soft control by an evolutionary iterated prisoner's dilemma (IPD) game staged on square lattices, where the shills adopt a particle swarm optimization (PSO) mechanism for strategy updating. We demonstrate not only that cooperation can be promoted by shills effectively seeking potentially better strategies and spreading them to others, but also that the frequency of cooperation can be arbitrarily controlled by choosing appropriate parameter settings. Moreover, we show that adding more shills does not contribute to further cooperation promotion, while assigning higher weights to the collective knowledge for strategy updating proves an efficient way to induce cooperative behavior. Our research provides insights into cooperation evolution in the presence of PSO-inspired shills and we hope it will be inspirational for future studies focusing on swarm intelligence based soft control.

  4. Nanotechnology regulation: a study in claims making.

    PubMed

    Malloy, Timothy F

    2011-01-25

    There appears to be consensus on the notion that the hazards of nanotechnology are a social problem in need of resolution, but much dispute remains over what that resolution should be. There are a variety of potential policy tools for tackling this challenge, including conventional direct regulation, self-regulation, tort liability, financial guarantees, and more. The literature in this area is replete with proposals embracing one or more of these tools, typically using conventional regulation as a foil in which its inadequacy is presented as justification for a new proposed approach. At its core, the existing literature raises a critical question: What is the most effective role of government as regulator in these circumstances? This article explores that question by focusing upon two policy approaches in particular: conventional regulation and self-regulation, often described as hard law and soft law, respectively. Drawing from the sociology of social problems, the article examines the soft law construction of the nanotechnology problem and the associated solutions, with emphasis on the claims-making strategies used. In particular, it critically examines the rhetoric and underlying grounds for the soft law approach. It also sets out the grounds and framework for an alternative construction and solution: the concept of iterative regulation.

  5. Experimental determination of frequency response function estimates for flexible joint industrial manipulators with serial kinematics

    NASA Astrophysics Data System (ADS)

    Saupe, Florian; Knoblach, Andreas

    2015-02-01

    Two different approaches for the determination of frequency response functions (FRFs) are used for the non-parametric closed loop identification of a flexible joint industrial manipulator with serial kinematics. The two applied experiment designs are based on low power multisine and high power chirp excitations. The main challenge is to eliminate disturbances of the FRF estimates caused by the numerous nonlinearities of the robot. For the experiment design based on chirp excitations, a simple iterative procedure is proposed which allows exploiting the good crest factor of chirp signals in a closed loop setup. An interesting synergy of the two approaches, beyond validation purposes, is pointed out.
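
    A minimal non-parametric FRF estimate of the H1 type (cross-spectrum over input auto-spectrum, averaged across segments), sketched for an assumed two-tap FIR plant driven by white noise; this illustrates the estimator itself, not the robot dynamics or the paper's multisine/chirp experiment designs:

```python
import numpy as np

# Illustrative discrete-time plant: y[n] = 0.5*u[n] + 0.3*u[n-1].
rng = np.random.default_rng(7)
u = rng.standard_normal(4096)
y = 0.5 * u + 0.3 * np.concatenate(([0.0], u[:-1]))

# H1 estimate: H(f) = S_yu(f) / S_uu(f), spectra averaged over segments.
nseg, nfft = 32, 128
Suu = np.zeros(nfft, dtype=complex)
Syu = np.zeros(nfft, dtype=complex)
for k in range(nseg):
    U = np.fft.fft(u[k * nfft:(k + 1) * nfft])
    Y = np.fft.fft(y[k * nfft:(k + 1) * nfft])
    Suu += U * np.conj(U)   # input auto-spectrum accumulator
    Syu += Y * np.conj(U)   # cross-spectrum accumulator
H = Syu / Suu

# True frequency response of the assumed FIR plant, for comparison.
w = 2.0 * np.pi * np.arange(nfft) / nfft
H_true = 0.5 + 0.3 * np.exp(-1j * w)
```

    Averaging over segments is what suppresses disturbance and nonlinearity effects in the estimate; the paper's chirp-based iteration pursues the same goal with a better crest factor per experiment.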

  6. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimation of the geometric flattening of any equidense surface identified by its fractional radius is developed. The program can also be applied in studies of planetary and stellar models.
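
    With a polynomial density profile, the bulk figures reduce to simple quadratures, e.g. M = 4π∫ρ(r)r²dr and I = (8π/3)∫ρ(r)r⁴dr for a sphere. A sketch using Gauss-Legendre nodes over one layer, with an invented linear density profile (not the paper's models):

```python
import numpy as np

def shell_integral(coeffs, R, power, n=16):
    # Gauss-Legendre quadrature of  integral_0^R  rho(r) * r**power  dr,
    # where rho(r) is a polynomial with coefficients in increasing order.
    x, w = np.polynomial.legendre.leggauss(n)   # nodes, weights on [-1, 1]
    r = 0.5 * R * (x + 1.0)                     # map nodes to [0, R]
    rho = np.polynomial.polynomial.polyval(r, coeffs)
    return 0.5 * R * np.sum(w * rho * r ** power)

def mass_and_inertia(coeffs, R):
    # Mass M = 4*pi * int rho r^2 dr;  moment of inertia of a sphere
    # I = (8*pi/3) * int rho r^4 dr.
    M = 4.0 * np.pi * shell_integral(coeffs, R, 2)
    I = (8.0 * np.pi / 3.0) * shell_integral(coeffs, R, 4)
    return M, I

R = 6.371e6                            # mean radius, m
# Linearly decreasing density (illustrative only, not PREM):
coeffs = [13000.0, -11000.0 / R]       # rho(r) = 13000 - 11000*(r/R)
M, I = mass_and_inertia(coeffs, R)
```

    Since the integrands are polynomials, the quadrature is exact for any node count n with 2n − 1 at or above the integrand degree, which is why the polynomial representation pairs so naturally with Gauss-Legendre rules.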

  7. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
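
    The two ingredients of the estimator can be sketched as a single threshold-then-project pass (the paper alternates them via ADMM, which also preserves the sparsity that a one-shot spectral projection partly destroys); the data and tuning values below are invented:

```python
import numpy as np

def threshold_and_project(S, lam, eps=1e-3):
    # Generalized soft thresholding of the off-diagonal entries of the
    # sample covariance S, followed by projection onto matrices with
    # smallest eigenvalue >= eps (the explicit eigenvalue constraint).
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)   # soft threshold
    np.fill_diagonal(T, np.diag(S))                     # keep diagonal
    vals, vecs = np.linalg.eigh(T)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))       # n = 20 samples, p = 50 variables
S = np.cov(X, rowvar=False)             # rank-deficient sample covariance
Sigma_hat = threshold_and_project(S, lam=0.3)
```

    The point of the projection is visible in the high-dimensional regime n < p: the sample covariance is singular, while the estimate is guaranteed positive definite and hence invertible.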

  8. Analysis of computer images in the presence of metals

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor

    2018-04-01

    Artifacts caused by intensely absorbing inclusions are encountered in computed tomography via polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction if high-Z inclusions in presence, previously we proposed and tested with synthetic data an iterative technique with soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up at the Institute of Crystallography FSRC "Crystallography and Photonics" RAS in which tomographic scans were successfully made of temporary tooth without inclusion and with Pb inclusion.

  9. Development and Validation of a Statistical Shape Modeling-Based Finite Element Model of the Cervical Spine Under Low-Level Multiple Direction Loading Conditions

    PubMed Central

    Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.

    2014-01-01

    Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051

  10. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study examines a three-dimensional visualization program, with emphasis on improving genetic algorithms for the layout optimization of a standard, discrete shipbuilding workshop. Taking a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an idealized equipment layout and a mathematical model whose objective is to minimize the total distance traveled between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and to yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using the analytic hierarchy process (AHP). The optimized discrete production workshop thus offers a practical reference for the optimization and layout of digitalized production workshops.
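
    The minimum-logistics-cost layout objective can be illustrated as a toy quadratic-assignment problem solved by a bare-bones genetic algorithm; the flow matrix, slot grid, and GA settings are invented for illustration and are far simpler than the paper's improved operators:

```python
import random

# Material-flow intensities between 5 machines (symmetric) and the grid
# coordinates of 5 candidate slots. All values are illustrative.
FLOW = [[0, 8, 2, 0, 1],
        [8, 0, 6, 1, 0],
        [2, 6, 0, 5, 1],
        [0, 1, 5, 0, 7],
        [1, 0, 1, 7, 0]]
SLOTS = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]

def cost(perm):
    # perm[i] = slot index of machine i; flow-weighted Manhattan distance.
    total = 0
    for i in range(5):
        for j in range(i + 1, 5):
            (xi, yi), (xj, yj) = SLOTS[perm[i]], SLOTS[perm[j]]
            total += FLOW[i][j] * (abs(xi - xj) + abs(yi - yj))
    return total

def genetic_layout(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(5), 5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        pop = pop[:pop_size // 2]             # truncation selection
        while len(pop) < pop_size:            # mutation: swap two slots
            child = list(rng.choice(pop[:5]))
            a, b = rng.sample(range(5), 2)
            child[a], child[b] = child[b], child[a]
            pop.append(child)
    return min(pop, key=cost)

best = genetic_layout()
```

    Real workshop instances have far more machines and constraints, which is exactly where improved operators for iterative efficiency, as pursued in the paper, start to matter.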

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  12. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    PubMed

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising measured spectra and characterising radionuclides in in vivo monitoring analysis, where difficulties arise from the very low activity levels to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
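
    The threshold-and-invert idea can be sketched with a one-level Haar transform standing in for the paper's full multi-resolution analysis; the peak shape, noise level, and threshold value below are invented:

```python
import numpy as np

def haar_denoise(x, thresh):
    # One-level Haar decomposition, soft thresholding of the detail
    # coefficients, and exact inverse transform.
    s = 1.0 / np.sqrt(2.0)
    a = s * (x[0::2] + x[1::2])        # approximation coefficients
    d = s * (x[0::2] - x[1::2])        # detail coefficients (noise-dominated)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = s * (a + d)
    y[1::2] = s * (a - d)
    return y

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)   # a single spectral peak
noisy = clean + 0.05 * rng.standard_normal(512)
denoised = haar_denoise(noisy, thresh=0.1)
```

    Because the transform is orthonormal, a zero threshold reconstructs the input exactly; soft thresholding then trades a small bias in the peak for a large reduction in broadband noise, iterated over multiple scales in the paper's method.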

  13. Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE

    NASA Astrophysics Data System (ADS)

    Song, Jiah; In Kwon, Oh; Seo, Jin Keun

    2012-11-01

    Magnetic resonance elastography (MRE) is an elastic tissue property imaging modality in which the phase-contrast based MRI imaging technique is used to measure internal displacement induced by a harmonically oscillating mechanical vibration. MRE has made rapid technological progress in the past decade and has now reached the stage of clinical use. Most of the research outcomes are based on the assumption of isotropy. Since soft tissues like skeletal muscles show anisotropic behavior, the MRE technique should be extended to anisotropic elastic property imaging. This paper considers reconstruction in a transversely isotropic model, which is the simplest case of anisotropy, and develops a new non-iterative reconstruction method for visualizing the elastic moduli distribution. This new method is based on an explicit representation formula using the Newtonian potential of measured displacement. Hence, the proposed method does not require iterations since it directly recovers the anisotropic elastic moduli. We perform numerical simulations in order to demonstrate the feasibility of the proposed method in recovering a two-dimensional anisotropic tensor.

  14. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation

    PubMed Central

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime

    2017-01-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for a patient depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE mesh showed high surface matching accuracy, element quality, and internal structure matching accuracy.
They can be directly and effectively used for clinical simulation of facial soft tissue change. PMID:29027022

  15. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation.

    PubMed

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J

    2018-04-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for a patient depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE mesh showed high surface matching accuracy, element quality, and internal structure matching accuracy.
They can be directly and effectively used for clinical simulation of facial soft tissue change.

  16. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.

We report that aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago, and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Lastly, further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
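The phase-pruning step, removing from the detection lists every arrival already explained by a well-located aftershock, can be sketched as follows; the data structures, station codes, and the fixed tolerance window are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Detection:
    """A single automatic phase detection: station code and arrival time (s)."""
    station: str
    time: float

def prune_aftershock_phases(detections, predicted_arrivals, tol=2.0):
    """Drop detections matching a predicted aftershock phase arrival.

    predicted_arrivals maps station -> list of predicted arrival times (s)
    for well-located aftershocks inside the defined source region; a
    detection is pruned if it lies within `tol` seconds of any prediction.
    """
    kept = []
    for det in detections:
        preds = predicted_arrivals.get(det.station, [])
        if not any(abs(det.time - t) <= tol for t in preds):
            kept.append(det)
    return kept
```

In a real pipeline the predicted times would come from travel-time computations for each located aftershock, and the pruned list would then be passed back to the phase associator.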

  17. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    DOE PAGES

    Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.; ...

    2016-06-08

We report that aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago, and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Lastly, further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.

  18. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    PubMed Central

    Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). 
Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
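The McNemar comparison used above depends only on the discordant pairs (samples the two classifiers disagree on); a minimal continuity-corrected sketch, with function and variable names of our choosing:

```python
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar chi-square statistic.

    b: samples classifier A labels correctly and classifier B does not;
    c: the reverse. Under the null hypothesis of equal accuracy the
    statistic is approximately chi-square with 1 degree of freedom, so
    values above ~3.84 reject equality at the 5% level.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

For example, 30 vs. 10 discordant samples gives a statistic of about 9.0, a significant difference, whereas a balanced 10 vs. 10 split gives 0.05, far from significance.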

  19. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes.

    PubMed

    Berhane, Tedros M; Lane, Charles R; Wu, Qiusheng; Anenkhonov, Oleg A; Chepinoga, Victor V; Autrey, Bradley C; Liu, Hongxing

    2018-01-01

Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar's chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection, which was found to substantially affect overall accuracy).
Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes.

  20. Refining patterns of joint hypermobility, habitus, and orthopedic traits in joint hypermobility syndrome and Ehlers-Danlos syndrome, hypermobility type.

    PubMed

    Morlino, Silvia; Dordoni, Chiara; Sperduti, Isabella; Venturini, Marina; Celletti, Claudia; Camerota, Filippo; Colombi, Marina; Castori, Marco

    2017-04-01

Joint hypermobility syndrome (JHS) and Ehlers-Danlos syndrome, hypermobility type (EDS-HT) are two overlapping heritable disorders (JHS/EDS-HT) recognized by separate sets of diagnostic criteria and still lacking a confirmatory test. This descriptive research was aimed at better characterizing the clinical phenotype of JHS/EDS-HT with a focus on available diagnostic criteria, in order to propose novel features and assessment strategies. One hundred and eighty-nine (163 females, 26 males; age: 2-73 years) patients from two Italian reference centers were investigated for Beighton score, range of motion in 21 additional joints, rate and sites of dislocations and sprains, recurrent soft-tissue injuries, tendon and muscle ruptures, body mass index, arm span/height ratio, wrist and thumb signs, and 12 additional orthopedic features. Rough rates were compared by age, sex, and handedness with a series of parametric and non-parametric tools. Multiple correspondence analysis was carried out for possible co-segregations of features. Beighton score and hypermobility at other joints were influenced by age at diagnosis. Rate and sites of joint instability complications did not vary according to age at diagnosis except for soft-tissue injuries. No major difference was registered by sex or dominant versus non-dominant body side. At multiple correspondence analysis, selected features tended to co-segregate in a dichotomous distribution. Dolichostenomelia and arachnodactyly segregated independently. This study pointed out a more protean musculoskeletal phenotype than previously considered according to available diagnostic criteria for JHS/EDS-HT. Our findings corroborated the need for a re-thinking of JHS/EDS-HT on clinical grounds in order to find better therapeutic and research strategies. © 2017 Wiley Periodicals, Inc.

  1. Stability and dynamic analysis of a slender column with curved longitudinal stiffeners

    NASA Technical Reports Server (NTRS)

    Lake, Mark S.

    1989-01-01

    The results of a stability design study are presented for a slender column with curved longitudinal stiffeners for large space structure applications. Linear stability analyses are performed using a link-plate representation of the stiffeners to determine stiffener local buckling stresses. Results from a set of parametric analyses are used to determine an approximate explicit expression for stiffener local buckling in terms of its geometric parameters. This expression along with other equations governing column stability and mass are assembled into a determinate system describing minimum mass stiffened column design. An iterative solution is determined to solve this system and a computer program incorporating this routine is presented. Example design problems are presented which verify the solution accuracy and illustrate the implementation of the solution routine. Also, observations are made which lead to a greatly simplified first iteration design equation relating the percent increase in column mass to the percent increase in column buckling load. From this, generalizations are drawn as to the mass savings offered by the stiffened column concept. Finally, the percent increase in fundamental column vibration frequency due to the addition of deployable stiffeners is studied.

  2. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Part I: Theory and description of model capabilities

    NASA Astrophysics Data System (ADS)

    Raffray, A. René; Federici, Gianfranco

    1997-04-01

    RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper represents part I of two sister and complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses shedding light on and identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required for better understanding the consequences of high energy deposition transients in particular for the ITER limiter case.

  3. Iterative Structural and Functional Synergistic Resolution Recovery (iSFS-RR) Applied to PET-MR Images in Epilepsy

    NASA Astrophysics Data System (ADS)

    Silva-Rodríguez, J.; Cortés, J.; Rodríguez-Osorio, X.; López-Urdaneta, J.; Pardo-Montero, J.; Aguiar, P.; Tsoumpas, C.

    2016-10-01

Structural Functional Synergistic Resolution Recovery (SFS-RR) is a technique that uses supplementary structural information from MR or CT to improve the spatial resolution of PET or SPECT images. This wavelet-based method may have a potential impact on clinical decision-making for focal brain disorders such as refractory epilepsy, since it can produce images with better quantitative accuracy and enhanced detectability. In this work, a method for the iterative application of SFS-RR (iSFS-RR) was first developed and optimized in terms of convergence and input voxel size, and the corrected images were used for the diagnosis of 18 patients with refractory epilepsy. To this end, PET/MR images were clinically evaluated through visual inspection, atlas-based asymmetry indices (AIs) and SPM (Statistical Parametric Mapping) analysis, using uncorrected images and images corrected with SFS-RR and iSFS-RR. Our results showed that the sensitivity can be increased from 78% for uncorrected images, to 84% for SFS-RR and 94% for the proposed iSFS-RR. Thus, the proposed methodology has demonstrated the potential to improve the management of refractory epilepsy patients in the clinical routine.
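The atlas-based asymmetry indices (AIs) mentioned above are typically computed from homologous left/right regional uptake values; a sketch assuming one common AI definition (the paper's exact normalization may differ):

```python
def asymmetry_index(left, right):
    """Percent asymmetry between homologous left/right regional uptake.

    AI = 100 * (left - right) / ((left + right) / 2); large |AI| flags
    hypometabolic regions that may lateralize the epileptogenic focus.
    """
    return 100.0 * (left - right) / ((left + right) / 2.0)
```

For example, uptake values of 90 (left) and 110 (right) give an AI of -20%, i.e., a 20% left-sided deficit relative to the regional mean.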

  4. Strain induced parametric pumping of a domain wall and its depinning from a notch

    NASA Astrophysics Data System (ADS)

    Nepal, Rabindra; Gungordu, Utkan; Kovalev, Alexey

Using Thiele's method and detailed micromagnetic simulations, we study resonant oscillation of a domain wall in a notch of a ferromagnetic nanowire due to the modulation of magnetic anisotropy by external AC strain. Such resonant oscillation results from parametric pumping of the domain wall by the AC strain at a frequency about double the free domain wall oscillation frequency, which is mainly determined by the perpendicular anisotropy and notch geometry. This effect leads to a substantial reduction in the depinning field or current required to depin a domain wall from the notch, and offers a mechanism for efficient domain wall motion in a notched nanowire. Our theoretical model accounts for the pinning potential due to a notch by explicitly calculating the ferromagnetic energy as a function of notch geometry parameters. We also find similar resonant domain wall oscillations and reduction in the domain wall depinning field or current due to surface acoustic waves in a soft ferromagnetic nanowire without uniaxial anisotropy that energetically favors an in-plane domain wall. DOE Early Career Award DE-SC0014189 and DMR-1420645.
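The underlying mechanism, a stiffness (anisotropy) modulated at about twice the free oscillation frequency, is classic parametric resonance; a toy oscillator integration, not the micromagnetic model, illustrating the resulting amplitude growth:

```python
import math

def simulate_parametric_pump(omega=1.0, eps=0.2, dt=1e-3, t_end=200.0):
    """Integrate x'' + omega^2 * (1 + eps * cos(2*omega*t)) * x = 0.

    Modulating the restoring stiffness at twice the natural frequency
    parametrically amplifies the oscillation (Mathieu instability),
    analogous to the strain-driven pumping of the pinned domain wall.
    Uses semi-implicit (symplectic) Euler time stepping; returns the
    peak amplitudes in the first and last 10 time units.
    """
    x, v, t = 1.0, 0.0, 0.0
    early_amp = late_amp = 0.0
    while t < t_end:
        v -= omega ** 2 * (1.0 + eps * math.cos(2.0 * omega * t)) * x * dt
        x += v * dt
        t += dt
        if t < 10.0:
            early_amp = max(early_amp, abs(x))
        elif t > t_end - 10.0:
            late_amp = max(late_amp, abs(x))
    return early_amp, late_amp
```

With eps = 0 the amplitude stays bounded; with the modulation on, the late-time amplitude grows exponentially, which is the same mechanism that lets a sub-threshold field or current depin the wall.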

  5. Assessment of conductor degradation in the ITER CS insert coil and implications for the ITER conductors

    NASA Astrophysics Data System (ADS)

    Mitchell, N.

    2007-01-01

    Nb3Sn cable in conduit-type conductors were expected to provide an efficient way of achieving large conductor currents at high field (up to 13 T) combined with good stability to electromagnetic disturbances due to the extensive helium contact area with the strands. Although ITER model coils successfully reached their design performance (Kato et al 2001 Fusion Eng. Des. 56/57 59-70), initial indications (Mitchell 2003 Fusion Eng. Des. 66-68 971-94) that there were unexplained performance shortfalls have been confirmed. Recent conductor tests (Pasztor et al 2004 IEEE Trans. Appl. Supercond. 14 1527-30) and modelling work (Mitchell 2005 Supercond. Sci. Technol. 18 396-404) suggest that the shortfalls are due to a combination of strand bending and filament fracture under the transverse magnetic loads. Using the new model, the extensive database from the ITER CS insert coil has been reassessed. A parametric fit based on a loss of filament area and n (the exponent of the power-law fit to the electric field) combined with a more rigorous consideration of the conductor field gradient has enabled the coil behaviour to be explained much more consistently than in earlier assessments, now fitting the Nb3Sn strain scaling laws when used with measurements of the conductor operating strain, including conditions when the insert coil current (and hence operating strain) were reversed. The coil superconducting performance also shows a fatigue-type behaviour consistent with recent measurements on conductor samples (Martovetsky et al 2005 IEEE Trans. Appl. Supercond. 15 1367-70). The ITER conductor design has already been modified compared to the CS insert, to increase the margin and provide increased resistance to the degradation, by using a steel jacket to provide thermal pre-compression to reduce tensile strain levels, reducing the void fraction from 36% to 33% and increasing the non-copper material by 25%. 
Test results are not yet available for the new design and performance predictions at present rely on models with limited verification.

  6. Fetoscopic Open Neural Tube Defect Repair: Development and Refinement of a Two-Port, Carbon Dioxide Insufflation Technique.

    PubMed

    Belfort, Michael A; Whitehead, William E; Shamshirsaz, Alireza A; Bateni, Zhoobin H; Olutoye, Oluyinka O; Olutoye, Olutoyin A; Mann, David G; Espinoza, Jimmy; Williams, Erin; Lee, Timothy C; Keswani, Sundeep G; Ayres, Nancy; Cassady, Christopher I; Mehollin-Ray, Amy R; Sanz Cortes, Magdalena; Carreras, Elena; Peiro, Jose L; Ruano, Rodrigo; Cass, Darrell L

    2017-04-01

    To describe development of a two-port fetoscopic technique for spina bifida repair in the exteriorized, carbon dioxide-filled uterus and report early results of two cohorts of patients: the first 15 treated with an iterative technique and the latter 13 with a standardized technique. This was a retrospective cohort study (2014-2016). All patients met Management of Myelomeningocele Study selection criteria. The intraoperative approach was iterative in the first 15 patients and was then standardized. Obstetric, maternal, fetal, and early neonatal outcomes were compared. Standard parametric and nonparametric tests were used as appropriate. Data for 28 patients (22 endoscopic only, four hybrid, two abandoned) are reported, but only those with a complete fetoscopic repair were analyzed (iterative technique [n=10] compared with standardized technique [n=12]). Maternal demographics and gestational age (median [range]) at fetal surgery (25.4 [22.9-25.9] compared with 24.8 [24-25.6] weeks) were similar, but delivery occurred at 35.9 (26-39) weeks of gestation with the iterative technique compared with 39 (35.9-40) weeks of gestation with the standardized technique (P<.01). Duration of surgery (267 [107-434] compared with 246 [206-333] minutes), complication rates, preterm prelabor rupture of membranes rates (4/12 [33%] compared with 1/10 [10%]), and vaginal delivery rates (5/12 [42%] compared with 6/10 [60%]) were not statistically different in the iterative and standardized techniques, respectively. In 6 of 12 (50%) compared with 1 of 10 (10%), respectively (P=.07), there was leakage of cerebrospinal fluid from the repair site at birth. Management of Myelomeningocele Study criteria for hydrocephalus-death at discharge were met in 9 of 12 (75%) and 3 of 10 (30%), respectively, and 7 of 12 (58%) compared with 2 of 10 (20%) have been treated for hydrocephalus to date. These latter differences were not statistically significant. 
Fetoscopic open neural tube defect repair does not appear to increase maternal-fetal complications as compared with repair by hysterotomy, allows for vaginal delivery, and may reduce long-term maternal risks. ClinicalTrials.gov, https://clinicaltrials.gov, NCT02230072.

  7. Effect of facial material softness and applied force on face mask dead volume, face mask seal, and inhaled corticosteroid delivery through an idealized infant replica.

    PubMed

    Carrigy, Nicholas B; O'Reilly, Connor; Schmitt, James; Noga, Michelle; Finlay, Warren H

    2014-08-01

    During the aerosol delivery device design and optimization process, in vitro lung dose (LD) measurements are often performed using soft face models, which may provide a more clinically relevant representation of face mask dead volume (MDV) and face mask seal (FMS) than hard face models. However, a comparison of MDV, FMS, and LD for hard and soft face models is lacking. Metal, silicone, and polyurethane represented hard, soft, and very soft facial materials, respectively. MDV was measured using a water displacement technique. FMS was measured using a valved holding chamber (VHC) flow rate technique. The LD of beclomethasone dipropionate (BDP) delivered via a 100-μg Qvar® pressurized metered dose inhaler with AeroChamber Plus® Flow-Vu® VHC and Small Mask, defined as that which passes through the nasal airways of the idealized infant geometry, was measured using a bias tidal flow system with a filter. MDV, FMS, and LD were measured at 1.5 lb and 3.5 lb of applied force. A mathematical model was used to predict LD based on experimental measurements of MDV and FMS. Experimental BDP LD measurements for ABS, silicone, and polyurethane at 1.5 lb were 0.9 (0.6) μg, 2.4 (1.9) μg, and 19.3 (0.9) μg, respectively. At 3.5 lb, the respective LD was 10.0 (1.5) μg, 13.8 (1.4) μg, and 14.2 (0.9) μg. Parametric analysis with the mathematical model showed that differences in FMS between face models had a greater impact on LD than differences in MDV. The use of soft face models resulted in higher LD than hard face models, with a greater difference at 1.5 lb than at 3.5 lb. A lack of a FMS led to decreased dose consistency; therefore, a sealant should be used when measuring LD with a hard ABS or soft silicone face model at 1.5 lb of applied force or less.

  8. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139
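The design-of-experiments step, uniformly sampling the uncertain inputs to bracket the prediction, can be sketched generically; the toy model, parameter names, and bounds below are illustrative stand-ins for the full finite element model:

```python
import random

def prediction_range(model, param_bounds, n_samples=500, seed=0):
    """Uniformly sample uncertain model parameters and return the spread
    (min, max) of predicted outputs, in the spirit of the paper's DOE step.

    `model` is any callable mapping a parameter dict to a scalar
    prediction; here it stands in for the full soft-tissue FEM.
    """
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_bounds.items()}
        outputs.append(model(params))
    return min(outputs), max(outputs)
```

For instance, a (hypothetical) soft-tissue-to-bone movement ratio uncertain between 0.6 and 0.9, applied to a 5 mm maxillary advancement, yields a predicted soft-tissue displacement range rather than a single deterministic value.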

  9. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  10. Multisite EPR oximetry from multiple quadrature harmonics.

    PubMed

    Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C

    2012-01-01

    Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
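For white Gaussian noise, the maximum-likelihood fit reduces to least squares over the lineshape parameters; a one-parameter toy version with a single Lorentzian lineshape (a stand-in for the joint multi-harmonic, multisite fit; all names are ours):

```python
import math

def lorentzian(B, B0, gamma):
    """Area-normalized Lorentzian absorption lineshape."""
    return (gamma / math.pi) / ((B - B0) ** 2 + gamma ** 2)

def fit_linewidth(fields, signal, B0, widths):
    """Least-squares linewidth estimate over a candidate grid of gammas.

    With white Gaussian noise, minimizing squared error over the model
    parameters is the maximum-likelihood estimate; the amplitude is
    solved in closed form for each trial width.
    """
    best = (float("inf"), None)
    for g in widths:
        model = [lorentzian(B, B0, g) for B in fields]
        denom = sum(m * m for m in model)
        a = sum(s * m for s, m in zip(signal, model)) / denom
        err = sum((s - a * m) ** 2 for s, m in zip(signal, model))
        best = min(best, (err, g))
    return best[1]
```

The actual method extends this to a joint model over several quadrature harmonics and sites, which is what yields the reported 2-3-fold acquisition speedup.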

  11. Conceptual design and structural analysis of the spectroscopy of the atmosphere using far infrared emission (SAFIRE) instrument

    NASA Technical Reports Server (NTRS)

    Moses, Robert W.; Averill, Robert D.

    1992-01-01

    The conceptual design and structural analysis for the Spectroscopy of the Atmosphere using Far Infrared Emission (SAFIRE) Instrument are provided. SAFIRE, which is an international effort, is proposed for the Earth Observing Systems (EOS) program for atmospheric ozone studies. A concept was developed which meets mission requirements and is the product of numerous parametric studies and design/analysis iterations. Stiffness, thermal stability, and weight constraints led to a graphite/epoxy composite design for the optical bench and supporting struts. The structural configuration was determined by considering various mounting arrangements of the optical, cryo, and electronic components. Quasi-static, thermal, modal, and dynamic response analyses were performed, and the results are presented for the selected configuration.

  12. Elliptic Double-Box Integrals: Massless Scattering Amplitudes beyond Polylogarithms

    NASA Astrophysics Data System (ADS)

    Bourjaily, Jacob L.; McLeod, Andrew J.; Spradlin, Marcus; von Hippel, Matt; Wilhelm, Matthias

    2018-03-01

    We derive an analytic representation of the ten-particle, two-loop double-box integral as an elliptic integral over weight-three polylogarithms. To obtain this form, we first derive a fourfold, rational (Feynman-)parametric representation for the integral, expressed directly in terms of dual-conformally invariant cross ratios; from this, the desired form is easily obtained. The essential features of this integral are illustrated by means of a simplified toy model, and we attach the relevant expressions for both integrals in ancillary files. We propose a normalization for such integrals that renders all of their polylogarithmic degenerations pure, and we discuss the need for a new "symbology" of mixed iterated elliptic and polylogarithmic integrals in order to bring them to a more canonical form.

  13. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Since the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).
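The q-exponential function at the core of the model is elementary to implement; a sketch using the usual cutoff convention:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1 - q) * x]^(1 / (1 - q)).

    Reduces to exp(x) as q -> 1; for q < 1 the standard cutoff
    convention returns 0 where the bracket is non-positive.
    """
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))
```

A q-exponential survival function S(t) = e_q(-t / eta) then generalizes the ordinary exponential lifetime model, recovering it in the q -> 1 limit.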

  14. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

This paper explores the use of path integral methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm, in the learning of parametrized policies for multidimensional movement. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods to learn optimal policy parameters according to different cost functions that inherently encode movement objectives. Additionally, we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and final cost, which leads to an increased interest in its application to more complex robotic problems.
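A minimal PIBB-style update, sampling parameter perturbations, weighting them by a softmax of their costs, and decaying the exploration noise, can be sketched as follows; all hyperparameter values are illustrative, and the real algorithm perturbs DMP basis-function weights rather than raw coordinates:

```python
import math
import random

def pibb(cost, theta, sigma=1.0, n_samples=20, n_iters=100, decay=0.95, h=10.0, seed=0):
    """Black-box policy improvement in the spirit of PIBB (sketch).

    Each iteration perturbs the policy parameters with Gaussian noise,
    evaluates the cost of every sample, and replaces the parameters with
    a softmax-weighted average of the samples (lower cost -> higher
    weight). The exploration magnitude sigma decays across iterations.
    """
    rng = random.Random(seed)
    theta = list(theta)
    for _ in range(n_iters):
        samples = [[t + rng.gauss(0.0, sigma) for t in theta] for _ in range(n_samples)]
        costs = [cost(s) for s in samples]
        cmin, cmax = min(costs), max(costs)
        span = (cmax - cmin) or 1.0  # guard against all-equal costs
        weights = [math.exp(-h * (c - cmin) / span) for c in costs]
        wsum = sum(weights)
        theta = [sum(w * s[i] for w, s in zip(weights, samples)) / wsum
                 for i in range(len(theta))]
        sigma *= decay
    return theta
```

On a simple quadratic cost this reliably pulls the parameters toward the minimum; the PI2-CMA and PIBB-CMA variants additionally adapt the exploration covariance instead of decaying a scalar sigma.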

  15. A time dependent difference theory for sound propagation in ducts with flow. [characteristic of inlet and exhaust ducts of turbofan engines

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1979-01-01

    A time dependent numerical solution of the linearized continuity and momentum equation was developed for sound propagation in a two dimensional straight hard or soft wall duct with a sheared mean flow. The time dependent governing acoustic difference equations and boundary conditions were developed along with a numerical determination of the maximum stable time increments. A harmonic noise source radiating into a quiescent duct was analyzed. This explicit iteration method then calculated stepwise in real time to obtain the transient as well as the steady state solution of the acoustic field. Example calculations were presented for sound propagation in hard and soft wall ducts, with no flow and plug flow. Although the problem with sheared flow was formulated and programmed, sample calculations were not examined. The time dependent finite difference analysis was found to be superior to the steady state finite difference and finite element techniques because of shorter solution times and the elimination of large matrix storage requirements.
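
    The explicit time-marching idea can be caricatured in a heavily simplified setting: a 1D wave equation with rigid (hard-wall) ends, no flow, and no soft-wall impedance. The grid size, sound speed, and CFL number below are arbitrary assumptions:

```python
import numpy as np

def propagate(p0, c=340.0, dx=0.01, n_steps=200, cfl=0.9):
    """Explicit leapfrog marching of the 1D wave equation with rigid walls.

    Stability requires the CFL condition c*dt/dx <= 1, enforced via `cfl`.
    """
    dt = cfl * dx / c
    r2 = (c * dt / dx) ** 2
    p_prev, p = p0.copy(), p0.copy()        # zero initial velocity
    for _ in range(n_steps):
        p_next = np.empty_like(p)
        p_next[1:-1] = 2 * p[1:-1] - p_prev[1:-1] + r2 * (p[2:] - 2 * p[1:-1] + p[:-2])
        p_next[0], p_next[-1] = p_next[1], p_next[-2]  # dp/dx = 0 at rigid walls
        p_prev, p = p, p_next
    return p

# march a Gaussian pressure pulse down a 1 m duct
x = np.linspace(0.0, 1.0, 101)
p_final = propagate(np.exp(-((x - 0.5) / 0.05) ** 2))
```

    Running the scheme long enough yields the steady-state acoustic field, which is the property the abstract exploits; exceeding the maximum stable time increment makes the iteration blow up.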

  16. Parameter identification of hyperelastic material properties of the heel pad based on an analytical contact mechanics model of a spherical indentation.

    PubMed

    Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi

    2017-01-01

    Accurate identification of the material properties of the plantar soft tissue is important for computer-aided analysis of foot pathologies and design of therapeutic footwear interventions based on subject-specific models of the foot. However, parameter identification of the hyperelastic material properties of plantar soft tissues usually requires an inverse finite element analysis due to the lack of a practical contact model of the indentation test. In the present study, we derive an analytical contact model of a spherical indentation test in order to directly estimate the material properties of the plantar soft tissue. Force-displacement curves of the heel pads are obtained through an indentation experiment. The experimental data are fit to the analytical stress-strain solution of the spherical indentation in order to obtain the parameters. A spherical indentation approach successfully predicted the non-linear material properties of the heel pad without iterative finite element calculation. The force-displacement curve obtained in the present study was found to be situated lower than those identified in previous studies. The proposed framework for identifying the hyperelastic material parameters may facilitate the development of subject-specific FE modeling of the foot for possible clinical and ergonomic applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
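
    The parameter-identification step, fitting measured force-displacement data to a closed-form indentation model, can be sketched with a generic two-parameter nonlinear least-squares fit. The exponential force law below is a stand-in assumption, not the paper's analytical spherical-indentation solution:

```python
import numpy as np
from scipy.optimize import curve_fit

def indentation_force(d, a, b):
    """Hypothetical nonlinear force-displacement model F = a*(exp(b*d) - 1)."""
    return a * (np.exp(b * d) - 1.0)

# synthetic "experimental" data standing in for real indentation measurements
rng = np.random.default_rng(1)
d = np.linspace(0.0, 5.0, 40)                       # displacement
f = indentation_force(d, 0.8, 0.5) + rng.normal(0.0, 0.05, d.size)

params, _ = curve_fit(indentation_force, d, f, p0=(1.0, 0.1))
```

    The point of the paper's analytical model is exactly this: once a closed-form force-displacement relation exists, material parameters come from a direct curve fit rather than from an inverse finite element loop.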

  17. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.
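
    The counting idea, incrementing a multiplicity for each branching that passes the soft drop condition z > z_cut (ΔR/R0)^β, can be sketched on a toy list of emissions; a real implementation walks the Cambridge/Aachen clustering tree, and the emission list and parameter values here are purely illustrative:

```python
def soft_drop_multiplicity(emissions, z_cut, beta, r0=1.0):
    """Count emissions (z, dR) that pass the soft drop condition z > z_cut*(dR/r0)**beta."""
    return sum(1 for z, dr in emissions if z > z_cut * (dr / r0) ** beta)

# toy branching history: (momentum fraction z, angular separation dR) pairs
emissions = [(0.20, 0.40), (0.01, 0.05), (0.30, 0.80)]
n_sd = soft_drop_multiplicity(emissions, z_cut=0.1, beta=0.0)
```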

  19. Characterization of onset of parametric decay instability of lower hybrid waves

    NASA Astrophysics Data System (ADS)

    Baek, S. G.; Bonoli, P. T.; Parker, R. R.; Shiraiwa, S.; Wallace, G. M.; Porkolab, M.; Takase, Y.; Brunner, D.; Faust, I. C.; Hubbard, A. E.; LaBombard, B. L.; Lau, C.

    2014-02-01

    The goal of the lower hybrid current drive (LHCD) program on Alcator C-Mod is to develop and optimize ITER-relevant steady-state plasmas by controlling the current density profile. Using a 4×16 waveguide array, over 1 MW of LH power at 4.6 GHz has been successfully coupled to the plasmas. However, current drive efficiency precipitously drops as the line averaged density (n̄e) increases above 10^20 m^-3. Previous numerical work shows that the observed loss of current drive efficiency in high density plasmas stems from the interactions of LH waves with edge/scrape-off layer (SOL) plasmas [Wallace et al., Physics of Plasmas 19, 062505 (2012)]. Recent observations of parametric decay instability (PDI) suggest that non-linear effects should also be taken into account to fully characterize the parasitic loss mechanisms [Baek et al., Plasma Phys. Control. Fusion 55, 052001 (2013)]. In particular, magnetic configuration dependent ion cyclotron PDIs are observed using the probes near n̄e ≈ 1.2×10^20 m^-3. In upper single null plasmas, ion cyclotron PDI is excited near the low field side separatrix with no apparent indications of pump depletion. The observed ion cyclotron PDI becomes weaker in inner wall limited plasmas, which exhibit enhanced current drive effects. In lower single null plasmas, the dominant ion cyclotron PDI is excited near the high field side (HFS) separatrix. In this case, the onset of PDI is correlated with the decrease in pump power, indicating that pump wave power propagates to the HFS and is absorbed locally near the HFS separatrix. Comparing the observed spectra with the homogeneous growth rate calculation indicates that the observed ion cyclotron instability is excited near the plasma periphery. The incident pump power density is high enough to overcome the collisional homogeneous threshold. For C-Mod plasma parameters, the growth rate of ion sound quasi-modes is found to be typically smaller by an order of magnitude than that of ion cyclotron quasi-modes. When considering the convective threshold near the plasma edge, convective growth due to parallel coupling rather than perpendicular coupling is likely to be responsible for the observed strength of the sidebands. To demonstrate the improved LHCD efficiency in high density plasmas, an additional launcher has been designed. In conjunction with the existing launcher, this new launcher will allow access to an ITER-like high single pass absorption regime, replicating the JLH(r) expected in ITER. The predictions from the time domain discharge scenarios, in which the two launchers are used, will also be presented.

  20. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
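
    The iterative soft-thresholding at the heart of ℓ1-regularized reconstruction can be sketched for the generic problem min ½‖Ax − b‖² + λ‖x‖₁. The dense matrix A below stands in for the wavelet-domain, multi-channel SPIRiT operator and is an assumption of this sketch, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# toy usage: recover a sparse vector from an overdetermined system
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

    Each iteration is a gradient step on the data term followed by a shrinkage step on the sparsity term; in the paper these two steps, applied to large multi-coil volumes, dominate the runtime and are what the GPU/CPU parallelizations target.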

  2. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. The results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
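
    The iterative idea, applying a candidate polynomial correction and minimizing a beam-hardening-specific cost, can be caricatured in 1D. Here the cost is deviation-from-linearity of projections versus path length (a stand-in for the paper's BHA-specific cost function), and every number is made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 5.0, 50)              # path length through the object
p_true = 0.2 * t                           # ideal monoenergetic projections
p_meas = p_true - 0.02 * p_true ** 2       # simulated beam-hardening "droop"

def nonlinearity(a):
    """Cost: deviation of corrected projections from linearity in path length."""
    p_corr = p_meas + a * p_meas ** 2      # candidate polynomial correction
    line = np.polyval(np.polyfit(t, p_corr, 1), t)
    return float(np.sum((p_corr - line) ** 2))

best = minimize_scalar(nonlinearity, bounds=(0.0, 1.0), method="bounded")
```

    The optimizer recovers a correction coefficient close to the simulated distortion without any calibration scan, which is the calibration-free principle the abstract describes, applied there to full segmented forward projections rather than a 1D toy.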

  3. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm3 uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while keeping the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  4. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
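
    The soft-constraint side, estimating the probability of constraint violation over a hyper-rectangular uncertainty set, can be sketched with plain Monte Carlo sampling. The paper's hybrid method (closed-form upper bounds plus conditional sampling) is more refined; this is only the sampling baseline, with a made-up constraint:

```python
import numpy as np

def failure_probability(g, lo, hi, n=100_000, seed=0):
    """Monte Carlo estimate of P[g(p) > 0] for p uniform on the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(lo, hi, size=(n, len(lo)))
    return float(np.mean(g(p) > 0.0))

# example: constraint p1 + p2 <= 1.5, violated on 1/8 of the unit square
est = failure_probability(lambda p: p[:, 0] + p[:, 1] - 1.5, [0.0, 0.0], [1.0, 1.0])
```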

  5. Soft gluon evolution and non-global logarithms

    NASA Astrophysics Data System (ADS)

    Martínez, René Ángeles; De Angelis, Matthew; Forshaw, Jeffrey R.; Plätzer, Simon; Seymour, Michael H.

    2018-05-01

    We consider soft-gluon evolution at the amplitude level. Our evolution algorithm applies to generic hard-scattering processes involving any number of coloured partons and we present a reformulation of the algorithm in such a way as to make the cancellation of infrared divergences explicit. We also emphasise the special role played by a Lorentz-invariant evolution variable, which coincides with the transverse momentum of the latest emission in a suitably defined dipole zero-momentum frame. Handling large colour matrices presents the most significant challenge to numerical implementations and we present a means to expand systematically about the leading colour approximation. Specifically, we present a systematic procedure to calculate the resulting colour traces, which is based on the colour flow basis. Identifying the leading contribution leads us to re-derive the Banfi-Marchesini-Smye equation. However, our formalism is more general and can systematically perform resummation of contributions enhanced by the 't Hooft coupling α_s N ∼ 1, along with successive perturbations that are parametrically suppressed by powers of 1/N. We also discuss how our approach relates to earlier work.

  6. n=5 to n=5 soft-x-ray emission of uranium in a high-temperature low-density tokamak plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, K.B.; Finkenthal, M.; Lippmann, S.

    1994-11-01

    The soft-x-ray uranium emission in the 60-200 Å range recorded from a high-temperature (~1 keV) low-density (~10^13 cm^-3) tokamak plasma has been analyzed by comparison with theoretical level structure and line-intensity calculations. In an extension of previous work [Finkenthal et al., Phys. Rev. A 45, 5846 (1992)], theoretical U XXV, U XXX, U XXXI, and U XXXII n=5 to n=5 spectra have been computed for the relevant plasma parameters. Fully relativistic parametric potential computer codes have been used for the ab initio atomic-structure calculations, and electron-impact excitation rates have been computed in the distorted-wave approximation. 5s-5p spectral lines and quasicontinua of U XXX, U XXXI, and U XXXII are identified in the 165-200 Å wavelength band. An unambiguous line identification is hampered by theoretical uncertainties and the blending of emission from adjacent charge states.

  7. CT dose reduction using Automatic Exposure Control and iterative reconstruction: A chest paediatric phantoms study.

    PubMed

    Greffier, Joël; Pereira, Fabricio; Macri, Francesco; Beregi, Jean-Paul; Larbi, Ahmed

    2016-04-01

    To evaluate the impact of Automatic Exposure Control (AEC) on radiation dose and image quality in paediatric chest scans (MDCT), with or without iterative reconstruction (IR). Three anthropomorphic phantoms representing children aged one, five and 10-year-old were explored using AEC system (CARE Dose 4D) with five modulation strength options. For each phantom, six acquisitions were carried out: one with fixed mAs (without AEC) and five each with different modulation strength. Raw data were reconstructed with Filtered Back Projection (FBP) and with two distinct levels of IR using soft and strong kernels. Dose reduction and image quality indices (Noise, SNR, CNR) were measured in lung and soft tissues. Noise Power Spectrum (NPS) was evaluated with a Catphan 600 phantom. The use of AEC produced a significant dose reduction (p<0.01) for all anthropomorphic sizes employed. According to the modulation strength applied, dose delivered was reduced from 43% to 91%. This pattern led to significantly increased noise (p<0.01) and reduced SNR and CNR (p<0.01). However, IR was able to improve these indices. The use of AEC/IR preserved image quality indices with a lower dose delivered. Doses were reduced from 39% to 58% for the one-year-old phantom, from 46% to 63% for the five-year-old phantom, and from 58% to 74% for the 10-year-old phantom. In addition, AEC/IR changed the patterns of NPS curves in amplitude and in spatial frequency. In chest paediatric MDCT, the use of AEC with IR allows one to obtain a significant dose reduction while maintaining constant image quality indices. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
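
    The core projection step in pMOR can be sketched generically: collect full-order snapshot solutions, build an orthonormal basis via SVD, and solve the Galerkin-reduced system at new parameter values. This is a generic sketch (the toy matrices are assumptions), not the adaptive, error-monitored scheme of the paper:

```python
import numpy as np

def reduced_solve(K, f, V):
    """Galerkin projection: solve (V^T K V) q = V^T f and lift back to V q."""
    return V @ np.linalg.solve(V.T @ K @ V, V.T @ f)

# toy parametric system K(p) = A + p*B (SPD), standing in for the FE model
rng = np.random.default_rng(0)
n = 30
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                # SPD base matrix
B = np.diag(rng.uniform(1.0, 2.0, n))      # parameter-dependent part
f = rng.normal(size=n)

# snapshots at a few parameter values -> orthonormal reduced basis
snapshots = np.column_stack([np.linalg.solve(A + p * B, f) for p in (0.1, 0.5, 1.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)

x_red = reduced_solve(A + 0.5 * B, f, V)
```

    Once V is built, each parameter/frequency evaluation costs a small dense solve instead of a full FE solve; the paper's contribution is growing V adaptively during the optimization instead of in an expensive offline sampling phase.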

  9. Biomechanics of the soft-palate in sleep apnea patients with polycystic ovarian syndrome.

    PubMed

    Subramaniam, Dhananjay Radhakrishnan; Arens, Raanan; Wagshul, Mark E; Sin, Sanghun; Wootton, David M; Gutmark, Ephraim J

    2018-05-17

    Highly compliant tissue supporting the pharynx and low muscle tone enhance the possibility of upper airway occlusion in children with obstructive sleep apnea (OSA). The present study describes subject-specific computational modeling of flow-induced velopharyngeal narrowing in a female child with polycystic ovarian syndrome (PCOS) with OSA and a non-OSA control. Anatomically accurate three-dimensional geometries of the upper airway and soft-palate were reconstructed for both subjects using magnetic resonance (MR) images. A fluid-structure interaction (FSI) shape registration analysis was performed using subject-specific values of flow rate to iteratively compute the biomechanical properties of the soft-palate. The optimized shear modulus for the control was 38 percent higher than the corresponding value for the OSA patient. The proposed computational FSI model was then employed for planning surgical treatment for the apneic subject. A virtual surgery comprising of a combined adenoidectomy, palatoplasty and genioglossus advancement was performed to estimate the resulting post-operative patterns of airflow and tissue displacement. Maximum flow velocity and velopharyngeal resistance decreased by 80 percent and 66 percent respectively following surgery. Post-operative flow-induced forces on the anterior and posterior faces of the soft-palate were equilibrated and the resulting magnitude of tissue displacement was 63 percent lower compared to the pre-operative case. Results from this pilot study indicate that FSI computational modeling can be employed to characterize the mechanical properties of pharyngeal tissue and evaluate the effectiveness of various upper airway surgeries prior to their application. Copyright © 2018. Published by Elsevier Ltd.

  10. Capture and isotopic exchange method for water and hydrogen isotopes on zeolite catalysts up to technical scale for pre-study of processing highly tritiated water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michling, R.; Braun, A.; Cristescu, I.

    2015-03-15

    Highly tritiated water (HTW) may be generated at ITER by various processes and, due to the excessive radiotoxicity, the self-radiolysis and the exceedingly corrosive property of HTW, a potential hazard is associated with its storage and processing. Therefore, the capture and exchange method for HTW utilizing Molecular Sieve Beds (MSB) was investigated in view of adsorption capacity, isotopic exchange performance and process parameters. For the MSB, different types of zeolite were selected. All zeolite materials were additionally coated with platinum. The following work comprised the selection of the most efficient zeolite candidate based on detailed parametric studies during the H2/D2O laboratory scale exchange experiments (about 25 g zeolite per bed) at the Tritium Laboratory Karlsruhe (TLK). For the zeolite characterization, analytical techniques such as Infrared Spectroscopy, Thermogravimetry and online mass spectrometry were implemented. Followed by further investigation of the selected zeolite catalyst under full technical operation, a MSB (about 22 kg zeolite) was processed with hydrogen flow rates up to 60 mol·h^-1 and deuterated water loads up to 1.6 kg in view of later ITER processing of arising HTW.

  11. Statistical detection of geographic clusters of resistant Escherichia coli in a regional network with WHONET and SaTScan.

    PubMed

    Park, Rachel; O'Brien, Thomas F; Huang, Susan S; Baker, Meghan A; Yokoe, Deborah S; Kulldorff, Martin; Barrett, Craig; Swift, Jamie; Stelling, John

    2016-11-01

    While antimicrobial resistance threatens the prevention, treatment, and control of infectious diseases, systematic analysis of routine microbiology laboratory test results worldwide can alert to new threats and promote timely responses. This study explores statistical algorithms for recognizing geographic clustering of multi-resistant microbes within a healthcare network and monitoring the dissemination of new strains over time. Escherichia coli antimicrobial susceptibility data from a three-year period stored in WHONET were analyzed across ten facilities in a healthcare network utilizing SaTScan's spatial multinomial model with two models for defining geographic proximity. We explored geographic clustering of multi-resistance phenotypes within the network and changes in clustering over time. Geographic clusters identified with the latitude/longitude and the non-parametric facility-grouping models were similar, while the latter offers greater flexibility and generalizability. Iterative application of the clustering algorithms suggested the possible recognition of the initial appearance of invasive E. coli ST131 in the clinical database of a single hospital and subsequent dissemination to others. Systematic analysis of routine antimicrobial resistance susceptibility test results supports the recognition of geographic clustering of microbial phenotypic subpopulations with WHONET and SaTScan, and iterative application of these algorithms can detect the initial appearance in and dissemination across a region, prompting early investigation, response, and containment measures.

  12. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.

  13. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of the only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
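
    The iterative growth-and-spall bookkeeping can be illustrated with a simplified uniform-layer variant. Note the actual model spalls only the thickest interfacial segments via a deterministic summation series; here the spall fraction is taken from the whole layer, and the parameter values and alumina-like oxygen mass fraction are assumptions:

```python
import numpy as np

def cyclic_oxidation(n_cycles, kp=1.0, dt=1.0, fa=0.1, sc=0.47):
    """Net specimen weight change per unit area over cyclic oxidation.

    Each cycle: parabolic oxide growth m -> sqrt(m^2 + kp*dt), then a fixed
    fraction fa of the oxide spalls.  sc is the oxygen mass fraction of the
    oxide: only oxygen uptake adds specimen weight, while spalled oxide
    removes its full mass (oxygen plus metal drawn from the substrate).
    """
    m = 0.0                  # retained oxide mass per unit area
    gained = spalled = 0.0   # cumulative oxide grown and oxide spalled
    w = []
    for _ in range(n_cycles):
        m_grown = np.sqrt(m * m + kp * dt)
        gained += m_grown - m
        loss = fa * m_grown
        spalled += loss
        m = m_grown - loss
        w.append(sc * gained - spalled)
    return np.array(w)
```

    Even this crude variant reproduces the characteristic cyclic-oxidation signature the model predicts: an initial weight gain followed by a crossover into steady net weight loss as spallation outpaces oxygen uptake.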

  14. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common approximations and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.
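The simplification that parametric distributions buy can be made concrete: if activity durations are modeled as independent Gaussians, the projected completion time stays Gaussian in closed form, so the risk of a deadline violation is one CDF evaluation. A minimal sketch (function names are illustrative, not the planner's actual API):

```python
import math

def project_completion(durations):
    """Projected finish time for a sequence of activities, each duration
    modeled as an independent Gaussian (mean, stddev) pair."""
    mu = sum(m for m, _ in durations)
    var = sum(s * s for _, s in durations)   # independent normals: variances add
    return mu, math.sqrt(var)

def violation_probability(durations, deadline):
    """Probability that the projected completion time exceeds the deadline."""
    mu, sigma = project_completion(durations)
    z = (deadline - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # normal upper-tail probability
```

A planner can then reject (or repair) any schedule whose violation probability exceeds the accepted risk level.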

  15. Multi-parametric surface plasmon resonance platform for studying liposome-serum interactions and protein corona formation.

    PubMed

    Kari, Otto K; Rojalin, Tatu; Salmaso, Stefano; Barattin, Michela; Jarva, Hanna; Meri, Seppo; Yliperttula, Marjo; Viitala, Tapani; Urtti, Arto

    2017-04-01

    When nanocarriers are administered into the blood circulation, a complex biomolecular layer known as the "protein corona" associates with their surface. Although the drivers of corona formation are not known, it is widely accepted that this layer mediates biological interactions of the nanocarrier with its surroundings. Label-free optical methods can be used to study protein corona formation without interfering with its dynamics. We demonstrate the proof-of-concept for a multi-parametric surface plasmon resonance (MP-SPR) technique in monitoring the formation of a protein corona on surface-immobilized liposomes subjected to flowing 100% human serum. We observed the formation of formulation-dependent "hard" and "soft" coronas with distinct refractive indices, layer thicknesses, and surface mass densities. MP-SPR was also employed to determine the affinity (K_D) of a complement system molecule (C3b) with cationic liposomes with and without polyethylene glycol. A tendency to form a thick corona correlated with a higher affinity of opsonin C3b for the surface. The label-free platform provides a fast and robust preclinical tool for tuning nanocarrier surface architecture and composition to control protein corona formation.

  16. Nonlinear dynamics near resonances of a rotor-active magnetic bearings system with 16-pole legs and time varying stiffness

    NASA Astrophysics Data System (ADS)

    Wu, R. Q.; Zhang, W.; Yao, M. H.

    2018-02-01

    In this paper, we analyze the complicated nonlinear dynamics of a rotor-active magnetic bearing (rotor-AMB) system with 16-pole legs and time-varying stiffness. The magnetic force with 16-pole legs is obtained by applying electromagnetic theory. The governing equation of motion for the rotor-AMB system is derived by using Newton's second law. The resulting dimensionless equation of motion is expressed as a two-degree-of-freedom nonlinear system including parametric excitation, quadratic and cubic nonlinearities. The averaged equation of the rotor-AMB system is obtained by using the method of multiple scales when the primary parametric resonance and 1/2 subharmonic resonance are taken into account. From the frequency-response curves, it is found that both soft-spring and hardening-spring type nonlinearities exist in the rotor-AMB system. The effects of different parameters on the nonlinear dynamic behaviors of the rotor-AMB system are investigated. The numerical results indicate that periodic, quasi-periodic and chaotic motions occur alternately in the rotor-AMB system.
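For a single representative mode, a system with parametric excitation plus quadratic and cubic nonlinearities has the generic form below; this is a schematic template of the ingredients named in the abstract, not the paper's actual two-degree-of-freedom equations:

```latex
\ddot{x} + \mu\,\dot{x} + \omega_0^{2}\left(1 + f\cos\Omega t\right)x
  + \alpha x^{2} + \beta x^{3} = F\cos\Omega t
```

The method of multiple scales averages such an equation near the primary parametric resonance Omega ~ 2*omega_0; broadly speaking, the sign of the effective cubic term after averaging is what separates soft-spring from hardening-spring behavior in the frequency-response curves.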

  17. Multi-dimensional PIC-simulations of parametric instabilities for shock-ignition conditions

    NASA Astrophysics Data System (ADS)

    Riconda, C.; Weber, S.; Klimo, O.; Héron, A.; Tikhonchuk, V. T.

    2013-11-01

    Laser-plasma interaction is investigated for conditions relevant for the shock-ignition (SI) scheme of inertial confinement fusion using two-dimensional particle-in-cell (PIC) simulations of an intense laser beam propagating in a hot, large-scale, non-uniform plasma. The temporal evolution and interdependence of Raman (SRS) and Brillouin (SBS) side/backscattering as well as Two-Plasmon-Decay (TPD) are studied. TPD develops in concomitance with SRS, creating a broad spectrum of plasma waves near the quarter-critical density. They are rapidly saturated due to plasma cavitation within a few picoseconds. The hot electron spectrum created by SRS and TPD is relatively soft, limited to energies below one hundred keV.

  18. Modeling the effects of wind tunnel wall absorption on the acoustic radiation characteristics of propellers

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Eversman, W.

    1986-01-01

    Finite element theory is used to calculate the acoustic field of a propeller in a soft walled circular wind tunnel and to compare the radiation patterns to the same propeller in free space. Parametric solutions are presented for a 'Gutin' propeller for a variety of flow Mach numbers, admittance values at the wall, microphone position locations, and propeller-to-duct radius ratios. The wind tunnel boundary layer is not included in this analysis. For wall admittance nearly equal to the characteristic value of free space, the free field and ducted propeller models agree in pressure level and directionality. In addition, the need for experimentally mapping the acoustic field is discussed.

  19. Merging NLO multi-jet calculations with improved unitarization

    NASA Astrophysics Data System (ADS)

    Bellm, Johannes; Gieseke, Stefan; Plätzer, Simon

    2018-03-01

    We present an algorithm to combine multiple matrix elements at LO and NLO with a parton shower. We build on the unitarized merging paradigm. The inclusion of higher orders and multiplicities reduces the scale uncertainties for observables sensitive to hard emissions, while preserving the features of inclusive quantities. The combination allows further soft and collinear emissions to be predicted by the all-order parton-shower approximation. We inspect the impact of terms that are formally but not parametrically negligible. We present results for a number of collider observables where multiple jets are observed, either on their own or in the presence of additional uncoloured particles. The algorithm is implemented in the event generator Herwig.

  1. Flap-Lag-Torsion Stability in Forward Flight

    NASA Technical Reports Server (NTRS)

    Panda, B.; Chopra, I.

    1985-01-01

    The aeroelastic stability of a three-degree-of-freedom flap-lag-torsion blade in forward flight is examined. Quasisteady aerodynamics with a dynamic inflow model is used. The nonlinear time-dependent periodic blade response is calculated using an iterative procedure based on Floquet theory. The periodic perturbation equations are solved for stability using Floquet transition matrix theory as well as a constant coefficient approximation in the fixed reference frame. Results are presented for both stiff-inplane and soft-inplane blade configurations. The effects of several parameters on blade stability are examined, including structural coupling, pitch-flap and pitch-lag coupling, torsion stiffness, steady inflow distribution, dynamic inflow, blade response solution, and the constant coefficient approximation.

  2. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of high-contrast details in bony areas and in brain soft tissue. For example, the results show that the ventricles and some brain folds become visible in SDIR-reconstructed images and that the contrast of visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of ASD-POCS to achieve 14 line-pair/cm caused its noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS were very close, equal to 0.7 mm^-1 at half maximum, which SDIR increased to 1.2 mm^-1.
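The core mechanism above, putting a measured blur kernel into the reconstruction's forward model and pairing it with an edge-preserving total-variation penalty, can be illustrated in one dimension. This is a toy gradient-descent sketch under stated simplifications (symmetric kernel, smoothed TV, no nonlocal term or split-Bregman solver as in the actual SDIR):

```python
import numpy as np

def gaussian_kernel(width=5, sigma=1.0):
    x = np.arange(width) - width // 2
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(x, k):
    return np.convolve(x, k, mode="same")

def deblur_tv(b, k, lam=0.01, eps=1e-3, steps=500, lr=0.5):
    """Minimize ||K x - b||^2 / 2 + lam * sum sqrt((grad x)^2 + eps)."""
    x = b.copy()
    for _ in range(steps):
        r = blur(x, k) - b                    # data-fidelity residual
        grad_fid = blur(r, k[::-1])           # adjoint blur (flipped kernel)
        d = np.diff(x, append=x[-1])          # forward differences
        q = d / np.sqrt(d ** 2 + eps)         # smoothed-TV "unit" gradients
        grad_tv = -np.diff(q, prepend=0.0)    # negative divergence of q
        x = x - lr * (grad_fid + lam * grad_tv)
    return x
```

Deconvolving with the known kernel sharpens edges that plain backprojection-style smoothing would leave blurred, while the TV term keeps the noise amplification of the inverse in check.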

  4. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
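For contrast with the paper's algebraic TAQMV solver, the baseline iterative (Gauss-Newton) formulation of the same weighted least-squares problem looks as follows, with the assumed altitude entering as one more noise-weighted residual exactly like a TOA. A spherical earth is assumed for brevity (the paper uses an ellipsoid), and all names are illustrative:

```python
import numpy as np

C = 299792.458   # speed of light, km/s
R = 6371.0       # km; spherical-earth assumption (the paper uses an ellipsoid)

def residuals(x, sensors, toas, alt, sig_toa, sig_alt):
    """Noise-weighted residuals: one per TOA, plus the soft altitude constraint."""
    p, t = x[:3], x[3]
    ranges = np.linalg.norm(sensors - p, axis=1)
    r_toa = (t + ranges / C - toas) / sig_toa
    r_alt = (np.linalg.norm(p) - (R + alt)) / sig_alt
    return np.append(r_toa, r_alt)

def solve_toa(sensors, toas, alt=0.0, sig_toa=1e-7, sig_alt=0.1, iters=25):
    """Gauss-Newton over the 4 unknowns: position (km) and clock bias (s)."""
    u = sensors.mean(axis=0)
    x = np.append(R * u / np.linalg.norm(u), 0.0)  # start on the sphere below sensors
    for _ in range(iters):
        f0 = residuals(x, sensors, toas, alt, sig_toa, sig_alt)
        J = np.empty((f0.size, 4))
        for j in range(4):                          # forward-difference Jacobian
            dx = np.zeros(4)
            dx[j] = 1e-4 if j < 3 else 1e-9         # km step vs seconds step
            J[:, j] = (residuals(x + dx, sensors, toas, alt,
                                 sig_toa, sig_alt) - f0) / dx[j]
        x = x - np.linalg.lstsq(J, f0, rcond=None)[0]
    return x   # [x, y, z, clock_bias]
```

With exact synthetic measurements and a reasonable start, this converges to the true position and clock bias; the paper's point is that in lightly overdetermined geometries such an iterative solver can fail to converge, which the algebraic quartic method avoids.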

  5. Interior tomography from differential phase contrast data via Hilbert transform based on spline functions

    NASA Astrophysics Data System (ADS)

    Yang, Qingsong; Cong, Wenxiang; Wang, Ge

    2016-10-01

    X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle before this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation, which is a new type of interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing, and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in the fact that the implementation of the system matrix requires a differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.

  6. All gates lead to smoking: the 'gateway theory', e-cigarettes and the remaking of nicotine.

    PubMed

    Bell, Kirsten; Keane, Helen

    2014-10-01

    The idea that drug use in 'softer' forms leads to 'harder' drug use lies at the heart of the gateway theory, one of the most influential models of drug use of the twentieth century. Although hotly contested, the notion of the 'gateway drug' continues to rear its head in discussions of drug use--most recently in the context of electronic cigarettes. Based on a critical reading of a range of texts, including scholarly literature and media reports, we explore the history and gestation of the gateway theory, highlighting the ways in which intersections between academic, media and popular accounts actively produced the concept. Arguing that the theory has been critical in maintaining the distinction between 'soft' and 'hard' drugs, we turn to its distinctive iteration in the context of debates about e-cigarettes. We show that the notion of the 'gateway' has been transformed from a descriptive to a predictive model, one in which nicotine is constituted as simultaneously 'soft' and 'hard'--as both relatively innocuous and incontrovertibly harmful.

  7. Numerical study on 3D composite morphing actuators

    NASA Astrophysics Data System (ADS)

    Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru

    2015-04-01

    There are a number of actuators using the deformation of electroactive polymers (EAPs), but fewer papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis of the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the location of the active EAP strips is a key parameter. The Simulia/Abaqus Static and Implicit analysis code, whose main feature is its high-precision contact analysis capability among structures, is used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input data set for the material properties in the simulation and for the verification of our numerical model, where the verification is made by comparison with the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.

  8. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain the same in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as accurate as conventional two-full-scan DECT.
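The similarity weighting at the heart of the structure-preserving prior can be sketched in a few lines: weights are an exponential function of pixel-value differences in the full-scan image, and each pixel of the second image is then estimated as the similarity-weighted average of the others (the abstract's penalty is the L2 norm of the image minus this estimate). The window-free all-pairs form and the decay constant `h` here are illustrative simplifications, not the paper's parameters:

```python
import numpy as np

def similarity_weights(ref, h=0.05):
    """Row-normalized weights w_ij = exp(-((ref_i - ref_j) / h)^2), i != j,
    computed from the full-scan reference image."""
    d = ref[:, None] - ref[None, :]
    w = np.exp(-(d / h) ** 2)
    np.fill_diagonal(w, 0.0)          # a pixel does not vote for itself
    return w / w.sum(axis=1, keepdims=True)

def structure_estimate(img, w):
    """Each pixel re-estimated from the pixels the *reference* deems similar."""
    return w @ img
```

Because the weights come from the first scan, a piecewise-constant second image with the same structure is reproduced exactly, which is why the prior suppresses noise without erasing the shared edges.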

  9. Characterization of onset of parametric decay instability of lower hybrid waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baek, S. G.; Bonoli, P. T.; Parker, R. R.

    2014-02-12

    The goal of the lower hybrid current drive (LHCD) program on Alcator C-Mod is to develop and optimize ITER-relevant steady-state plasmas by controlling the current density profile. Using a 4×16 waveguide array, over 1 MW of LH power at 4.6 GHz has been successfully coupled to the plasmas. However, current drive efficiency precipitously drops as the line averaged density (n̄_e) increases above 10^20 m^-3. Previous numerical work shows that the observed loss of current drive efficiency in high density plasmas stems from the interactions of LH waves with edge/scrape-off layer (SOL) plasmas [Wallace et al., Physics of Plasmas 19, 062505 (2012)]. Recent observations of parametric decay instability (PDI) suggest that non-linear effects should also be taken into account to fully characterize the parasitic loss mechanisms [Baek et al., Plasma Phys. Control. Fusion 55, 052001 (2013)]. In particular, magnetic-configuration-dependent ion cyclotron PDIs are observed using the probes near n̄_e ≈ 1.2×10^20 m^-3. In upper single null plasmas, ion cyclotron PDI is excited near the low field side separatrix with no apparent indications of pump depletion. The observed ion cyclotron PDI becomes weaker in inner wall limited plasmas, which exhibit enhanced current drive effects. In lower single null plasmas, the dominant ion cyclotron PDI is excited near the high field side (HFS) separatrix. In this case, the onset of PDI is correlated with the decrease in pump power, indicating that pump wave power propagates to the HFS and is absorbed locally near the HFS separatrix. Comparing the observed spectra with the homogeneous growth rate calculation indicates that the observed ion cyclotron instability is excited near the plasma periphery. The incident pump power density is high enough to overcome the collisional homogeneous threshold. For C-Mod plasma parameters, the growth rate of ion sound quasi-modes is found to be typically smaller by an order of magnitude than that of ion cyclotron quasi-modes. When considering the convective threshold near the plasma edge, convective growth due to parallel coupling rather than perpendicular coupling is likely to be responsible for the observed strength of the sidebands. To demonstrate improved LHCD efficiency in high density plasmas, an additional launcher has been designed. In conjunction with the existing launcher, this new launcher will allow access to an ITER-like high single pass absorption regime, replicating the J_LH(r) expected in ITER. Predictions for the time domain discharge scenarios in which the two launchers are used will also be presented.

  10. Validation of 3D documentation of palatal soft tissue shape, color, and irregularity with intraoral scanning.

    PubMed

    Deferm, Julie T; Schreurs, Ruud; Baan, Frank; Bruggink, Robin; Merkx, Matthijs A W; Xi, Tong; Bergé, Stefaan J; Maal, Thomas J J

    2018-04-01

    The purpose of this study was to assess the feasibility of 3D intraoral scanning for documentation of palatal soft tissue by evaluating the accuracy of shape, color, and curvature. Intraoral scans of ten participants' upper dentition and palate were acquired with the TRIOS® 3D intraoral scanner by two observers. Conventional impressions were taken and digitized as a gold standard. The resulting surface models were aligned using an Iterative Closest Point approach. The absolute distance measurements between the intraoral models and the digitized impression were used to quantify the trueness and precision of intraoral scanning. The mean color of the palatal soft tissue was extracted in HSV (hue, saturation, value) format to establish the color precision. Finally, the mean curvature of the surface models was calculated and used for surface irregularity. The mean average distance error between the conventional impression models and the intraoral models was 0.02 ± 0.07 mm (p = 0.30). The mean interobserver color difference was -0.08 ± 1.49° (p = 0.864), 0.28 ± 0.78% (p = 0.286), and 0.30 ± 1.14% (p = 0.426) for hue, saturation, and value, respectively. The interobserver differences for overall and maximum surface irregularity were 0.01 ± 0.03 and 0.00 ± 0.05 mm. This study supports the hypothesis that intraoral scanning can provide 3D documentation of palatal soft tissue in terms of shape, color, and curvature. An intraoral scanner can be an objective tool, adjunctive to the clinical examination of the palatal tissue.
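The Iterative Closest Point alignment used above alternates nearest-neighbour matching with a closed-form rigid-transform fit. The latter step, the Kabsch/SVD solution for known correspondences, is sketched here as a generic illustration, not the scanner software's implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t minimizing ||R @ p + t - q||
    over corresponding rows of P and Q (the alignment step inside ICP)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    Rm = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Rm, cq - Rm @ cp
```

Full ICP would re-estimate correspondences between the scan and the digitized impression after each such fit until the mean distance stops decreasing, at which point the residual distances give the trueness figures reported above.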

  11. Virtuality and transverse momentum dependence of the pion distribution amplitude

    DOE PAGES

    Radyushkin, Anatoly V.

    2016-03-08

    We describe the basics of a new approach to transverse momentum dependence in hard exclusive processes. We develop it in application to the transition process γ*γ → π^0 at the handbag level. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, the bilocal O(0,z)) describing a hadron with momentum p. Treated as functions of (pz) and z^2, they are parametrized through virtuality distribution amplitudes (VDA) Φ(x,σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z^2. For intervals with z_+ = 0, we introduce the transverse momentum distribution amplitude (TMDA) ψ(x, k), and write it in terms of the VDA Φ(x,σ). The results of covariant calculations, written in terms of Φ(x,σ), are converted into expressions involving ψ(x, k). Starting with scalar toy models, we extend the analysis onto the case of spin-1/2 quarks and QCD. We propose simple models for soft VDAs/TMDAs, and use them for comparison of handbag results with experimental (BaBar and BELLE) data on the pion transition form factor. Furthermore, we discuss how one can generate high-k tails from primordial soft distributions.

  12. Low Velocity Earth-Penetration Test and Analysis

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jones, Yvonne; Knight, Norman F., Jr.; Kellas, Sotiris

    2001-01-01

    Modeling and simulation of structural impacts into soil continue to challenge analysts to develop accurate material models and detailed analytical simulations to predict the soil penetration event. This paper discusses finite element modeling of a series of penetrometer drop tests into soft clay. Parametric studies are performed with penetrometers of varying diameters, masses, and impact speeds to a maximum of 45 m/s. Parameters influencing the simulation such as the contact penalty factor and the material model representing the soil are also studied. An empirical relationship between key parameters is developed and is shown to correlate experimental and analytical results quite well. The results provide preliminary design guidelines for Earth impact that may be useful for future space exploration sample return missions.

  13. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  14. Results of neutron irradiation of GEM detector for plasma radiation detection

    NASA Astrophysics Data System (ADS)

    Jednorog, S.; Bienkowska, B.; Chernyshova, M.; Łaszynska, E.; Prokopowicz, R.; Ziołkowski, A.

    2015-09-01

    Detection devices dedicated to plasma monitoring will be exposed to massive fluxes of neutrons, photons, and other radiation produced by fusion reactions and by the interactions of their products with the plasma itself or its surroundings. As a result, the metallic components of the detection module will be activated, becoming a source of radiation. Moreover, electronic components could change their properties. The prototype GEM detector constructed for monitoring soft X-ray radiation in ITER-oriented tokamaks was used for plasma monitoring during an experimental campaign on the ASDEX Upgrade tokamak. After that, it became a source of gamma radiation induced by neutrons. The present work describes the activation of the detector under laboratory conditions.

  15. Conceptual Design and Analysis of Cold Mass Support of the CS3U Feeder for the ITER

    NASA Astrophysics Data System (ADS)

    Zhu, Yinfeng; Song, Yuntao; Zhang, Yuanbin; Wang, Zhongwei

    2013-06-01

    In the International Thermonuclear Experimental Reactor (ITER) project, the feeders are one of the most important and critical systems. To convey the power supply and the coolant for the central solenoid (CS) magnet, 6 sets of CS feeders are employed, which consist mainly of an in-cryostat feeder (ICF), a cryostat feed-through (CFT), an S-bend box (SBB), and a coil terminal box (CTB). To compensate for the displacements of the internal components of the CS feeders during operation, sliding cold mass supports consisting of a sled plate, a cylindrical support, a thermal shield, and an external ring are developed. To check the strength of the developed cold mass supports of the CS3U feeder, electromagnetic analysis of the two superconducting busbars is performed using the CATIA V5 and ANSYS codes based on parametric technology. Furthermore, a coupled thermal-structural analysis is performed based on the obtained results; apart from local stress concentrations, the maximum stress intensity is lower than the allowable stress of the selected material. It is found that the conceptual design of the cold mass support can satisfy the required functions under the worst case of normal working conditions. All these activities will provide a firm technical basis for the engineering design and development of cold mass supports.

  16. Statistical detection of geographic clusters of resistant Escherichia coli in a regional network with WHONET and SaTScan

    PubMed Central

    Park, Rachel; O'Brien, Thomas F.; Huang, Susan S.; Baker, Meghan A.; Yokoe, Deborah S.; Kulldorff, Martin; Barrett, Craig; Swift, Jamie; Stelling, John

    2016-01-01

    Objectives While antimicrobial resistance threatens the prevention, treatment, and control of infectious diseases, systematic analysis of routine microbiology laboratory test results worldwide can alert to new threats and promote timely response. This study explores statistical algorithms for recognizing geographic clustering of multi-resistant microbes within a healthcare network and monitoring the dissemination of new strains over time. Methods Escherichia coli antimicrobial susceptibility data from a three-year period stored in WHONET were analyzed across ten facilities in a healthcare network utilizing SaTScan's spatial multinomial model with two models for defining geographic proximity. We explored geographic clustering of multi-resistance phenotypes within the network and changes in clustering over time. Results Geographic clusters identified with the latitude/longitude and the non-parametric facility-grouping models were similar, while the latter offers greater flexibility and generalizability. Iterative application of the clustering algorithms suggested the possible recognition of the initial appearance of invasive E. coli ST131 in the clinical database of a single hospital and its subsequent dissemination to others. Conclusion Systematic analysis of routine antimicrobial susceptibility test results supports the recognition of geographic clustering of microbial phenotypic subpopulations with WHONET and SaTScan, and iterative application of these algorithms can detect the initial appearance of new strains in a region and their dissemination across it, prompting early investigation, response, and containment measures. PMID:27530311

  17. Stress free configuration of the human eye.

    PubMed

    Elsheikh, Ahmed; Whitford, Charles; Hamarashid, Rosti; Kassem, Wael; Joda, Akram; Büchler, Philippe

    2013-02-01

    Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. The topographies, which represent the form of the already stressed eye under the existing intraocular pressure, introduce approximations in the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under the effect of intraocular pressure is determined, or the stress-free form of the eye is estimated prior to conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is both simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study has been conducted and demonstrated the dependence of the error level on the flexibility of the eye model, especially in the cornea region. However, in all cases considered 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location <10⁻⁶ mm and a maximal error <10⁻⁴ mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
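
    The iterative idea can be illustrated on a toy model: repeatedly correct the candidate stress-free geometry by the residual between its inflated configuration and the measured one. The scalar forward model and constants below are illustrative assumptions, not ocular mechanics or the paper's actual scheme.

```python
# Fixed-point recovery of a stress-free geometry, sketched on a toy
# linear-elastic "inflation" model with made-up constants.

def inflate(r0, p=2.0, k=50.0):
    """Toy forward model: radius of the pressurized configuration."""
    return r0 * (1.0 + p / k)

def stress_free_radius(r_measured, n_iter=10):
    r0 = r_measured                        # start from the measured (loaded) shape
    for _ in range(n_iter):
        r0 += r_measured - inflate(r0)     # correct by the residual
    return r0

r_meas = 12.0                              # hypothetical measured radius (mm)
r0 = stress_free_radius(r_meas)
# inflating the recovered stress-free shape reproduces the measurement
assert abs(inflate(r0) - r_meas) < 1e-9
```

    Because the forward map is close to the identity, the residual correction is a strong contraction and a handful of iterations suffice, matching the 3-4 iterations reported above.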

  18. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference between the proved theorem and its classical analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
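
    For context, the classical Uzawa algorithm that the theorem regularizes can be sketched for a finite-dimensional equality-constrained quadratic program. This is only the textbook iteration under assumed toy data, not the paper's regularized, infinite-dimensional construction.

```python
import numpy as np

# Classical Uzawa iteration for  min 0.5 x'Qx - c'x  s.t.  Ax = b:
# minimize the Lagrangian in x, then ascend on the dual multiplier.

def uzawa(Q, c, A, b, rho=0.5, n_iter=100):
    lam = np.ones(A.shape[0])                   # arbitrary dual starting point
    for _ in range(n_iter):
        x = np.linalg.solve(Q, c - A.T @ lam)   # primal minimization step
        lam = lam + rho * (A @ x - b)           # dual ascent step
    return x, lam

Q = np.diag([2.0, 2.0]); c = np.ones(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, lam = uzawa(Q, c, A, b)
assert np.allclose(x, [0.5, 0.5])               # constrained minimizer
```

    The regularization discussed in the abstract addresses exactly the situations where this plain iteration becomes unstable under perturbed data.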

  19. Research on the Integration of Bionic Geometry Modeling and Simulation of Robot Foot Based on Characteristic Curve

    NASA Astrophysics Data System (ADS)

    He, G.; Zhu, H.; Xu, J.; Gao, K.; Zhu, D.

    2017-09-01

    The bionic research of shape is an important aspect of the research on bionic robot, and its implementation cannot be separated from the shape modeling and numerical simulation of the bionic object, which is tedious and time-consuming. In order to improve the efficiency of shape bionic design, the feet of animals living in soft soil and swamp environment are taken as bionic objects, and characteristic skeleton curve, section curve, joint rotation variable, position and other parameters are used to describe the shape and position information of bionic object’s sole, toes and flipper. The geometry modeling of the bionic object is established by using the parameterization of characteristic curves and variables. Based on this, the integration framework of parametric modeling and finite element modeling, dynamic analysis and post-processing of sinking process in soil is proposed in this paper. The examples of bionic ostrich foot and bionic duck foot are also given. The parametric modeling and integration technique can achieve rapid improved design based on bionic object, and it can also greatly improve the efficiency and quality of robot foot bionic design, and has important practical significance to improve the level of bionic design of robot foot’s shape and structure.

  20. Neutrinoless double-β decay in effective field theory: The light-Majorana neutrino-exchange mechanism

    NASA Astrophysics Data System (ADS)

    Cirigliano, Vincenzo; Dekens, Wouter; Mereghetti, Emanuele; Walker-Loud, André

    2018-06-01

    We present the first chiral effective theory derivation of the neutrinoless double-β decay n n →p p potential induced by light Majorana neutrino exchange. The effective-field-theory framework has allowed us to identify and parametrize short- and long-range contributions previously missed in the literature. These contributions cannot be absorbed into parametrizations of the single-nucleon form factors. Starting from the quark and gluon level, we perform the matching onto chiral effective field theory and subsequently onto the nuclear potential. To derive the nuclear potential mediating neutrinoless double-β decay, the hard, soft, and potential neutrino modes must be integrated out. This is performed through next-to-next-to-leading order in the chiral power counting, in both the Weinberg and pionless schemes. At next-to-next-to-leading order, the amplitude receives additional contributions from the exchange of ultrasoft neutrinos, which can be expressed in terms of nuclear matrix elements of the weak current and excitation energies of the intermediate nucleus. These quantities also control the two-neutrino double-β decay amplitude. Finally, we outline strategies to determine the low-energy constants that appear in the potentials, by relating them to electromagnetic couplings and/or by matching to lattice QCD calculations.

  1. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    PubMed

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and the hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that the TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed than HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation at non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of the deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability in clinical scenarios. Copyright (c) 2009 Elsevier Inc. All rights reserved.
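
    The sparse-to-dense TPS step can be sketched in a few lines: fit the standard TPS system (radial kernel U(r) = r² log r plus an affine part) to landmark values, then evaluate anywhere. The landmark positions and displacement values below are hypothetical, not from the paper's data.

```python
import numpy as np

# Minimal 2-D thin-plate spline interpolation of sparse landmark values.

def _U(d):
    d_safe = np.where(d > 0, d, 1.0)        # log(1) = 0 handles r = 0 cleanly
    return d**2 * np.log(d_safe)

def tps_fit(pts, vals):
    n = len(pts)
    K = _U(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])   # affine part: 1, x, y
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    return np.linalg.solve(L, np.concatenate([vals, np.zeros(3)]))

def tps_eval(pts, coef, x):
    u = _U(np.linalg.norm(x[None, :] - pts, axis=-1))
    return u @ coef[:-3] + coef[-3] + coef[-2:] @ x

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.3]])
vals = np.array([0.0, 0.1, -0.1, 0.05, 0.2])    # e.g. x-displacements
coef = tps_fit(pts, vals)
# a TPS interpolates its landmarks exactly
assert all(abs(tps_eval(pts, coef, p) - v) < 1e-8 for p, v in zip(pts, vals))
```

    In a registration setting one such spline per displacement component turns a sparse landmark correspondence into a smooth dense deformation field, which is the property exploited above.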

  2. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behavior in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism and thereby automatically adjust the model complexity. The method is shown to produce convincing reconstructions in good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
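
    The analytic Gaussian conditioning at the heart of this approach can be sketched on a toy line-integral problem, y = A f + noise, with a non-stationary (Gibbs) covariance whose length scale shrinks toward the profile center. The geometry, noise level, and length-scale profile below are illustrative assumptions; the paper's tomography setup is far more detailed.

```python
import numpy as np

# GP inversion of a toy line-integral problem with a Gibbs (non-stationary)
# kernel; posterior mean and covariance are available in closed form.

def gibbs_kernel(x, ell):
    lx, ly = np.meshgrid(ell, ell, indexing="ij")
    xx, yy = np.meshgrid(x, x, indexing="ij")
    s = lx**2 + ly**2
    return np.sqrt(2.0 * lx * ly / s) * np.exp(-((xx - yy) ** 2) / s)

n, m, sigma = 50, 10, 0.01
x = np.linspace(-1.0, 1.0, n)
K = gibbs_kernel(x, 0.1 + 0.3 * x**2)      # shorter length scale at the center

A = np.zeros((m, n))                       # each "chord" averages a window
w = n // m
for i in range(m):
    A[i, i * w:(i + 1) * w] = 1.0 / w

rng = np.random.default_rng(1)
f_true = np.exp(-x**2 / 0.1)
y = A @ f_true + sigma * rng.standard_normal(m)

S = A @ K @ A.T + sigma**2 * np.eye(m)     # marginal covariance of the data
G = K @ A.T @ np.linalg.inv(S)
mean = G @ y                               # posterior mean profile
cov = K - G @ A @ K                        # posterior covariance

# conditioning on data can only reduce the pointwise variance
assert np.all(np.diag(cov) <= np.diag(K) + 1e-9)
```

    The diagonal of `cov` is the pointwise uncertainty the abstract emphasizes; hyper-parameter optimization would sit on top of this conditioning step.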

  3. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behavior in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism and thereby automatically adjust the model complexity. The method is shown to produce convincing reconstructions in good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  4. Accurate tissue characterization in low-dose CT imaging with pure iterative reconstruction.

    PubMed

    Murphy, Kevin P; McLaughlin, Patrick D; Twomey, Maria; Chan, Vincent E; Moloney, Fiachra; Fung, Adrian J; Chan, Faimee E; Kao, Tafline; O'Neill, Siobhan B; Watson, Benjamin; O'Connor, Owen J; Maher, Michael M

    2017-04-01

    We assess the ability of low-dose hybrid iterative reconstruction (IR) and 'pure' model-based IR (MBIR) images to maintain accurate Hounsfield unit (HU)-determined tissue characterization. Standard-protocol (SP) and low-dose modified-protocol (MP) CTs were contemporaneously acquired in 34 Crohn's disease patients referred for CT. SP image reconstruction followed the manufacturer's recommendations (60% FBP, filtered back projection; 40% ASiR, Adaptive Statistical Iterative Reconstruction; SP-ASiR40). MP data sets underwent four reconstructions (100% FBP; 40% ASiR; 70% ASiR; MBIR). Three observers measured tissue volumes using HU thresholds for fat, soft tissue and bone/contrast on each data set. Analysis was via SPSS. Inter-observer agreement was strong for 1530 datapoints (rs > 0.9). MP-MBIR tissue volume measurement was superior to the other MP reconstructions and closely correlated with the reference SP-ASiR40 images for all tissue types. MP-MBIR superiority was most marked for fat volume calculation: close SP-ASiR40 and MP-MBIR Bland-Altman plot correlation was seen, with the lowest average difference (336 cm³), when compared with other MP reconstructions. Hounsfield unit-determined tissue volume calculations from MP-MBIR images resulted in values comparable to SP-ASiR40 calculations and superior to those from MP-ASiR images. The accuracy of estimating tissue volumes (e.g. fat) with segmentation software on low-dose CT images appears optimal when images are reconstructed with pure IR. © 2016 The Royal Australian and New Zealand College of Radiologists.
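
    The HU-threshold volume measurement itself is simple to sketch. The threshold ranges below are common textbook values assumed for illustration, and the volume is synthetic; the study's exact thresholds are not reproduced here.

```python
import numpy as np

# HU-threshold tissue-volume measurement on a synthetic CT volume.

rng = np.random.default_rng(0)
vol_hu = rng.uniform(-1000, 1500, size=(64, 64, 64))   # fake HU volume
voxel_cm3 = (0.7 * 0.7 * 0.7) / 1000.0                 # 0.7 mm isotropic voxels

ranges = {"fat": (-190, -30),
          "soft_tissue": (-29, 150),
          "bone_contrast": (151, 3000)}

volumes = {name: ((vol_hu >= lo) & (vol_hu <= hi)).sum() * voxel_cm3
           for name, (lo, hi) in ranges.items()}

total = vol_hu.size * voxel_cm3
assert sum(volumes.values()) <= total      # the three classes never overlap
assert volumes["fat"] > 0
```

    The study's point is that such threshold-based volumes are only trustworthy if the reconstruction preserves HU accuracy, which is what MBIR achieved at low dose.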

  5. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aouadi, S; McGarry, M; Hammoud, R

    Purpose: To develop and validate a 4-class tissue segmentation approach (air cavities, background, bone and soft tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5 T GE MRI-Simulator, are used. MRIs are first pre-processed to correct for non-uniformity using the non-parametric non-uniformity intensity normalization algorithm. Subsequently, a logarithmic inverse scaling, log(1/image), is applied prior to segmentation to better differentiate bone and air from soft tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft tissue: thresholded region growing with seed points in the image corners is applied to obtain a mask of air + bone + background. The background is then separated by the scan-line filling algorithm. The air mask is extracted by morphological opening followed by post-processing based on knowledge of air-region geometry. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves under the sum of internal forces from the contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning −1000 HU to air and background voxels, performing a linear mapping of soft-tissue MR intensities into [−400 HU, 200 HU] and an inverse linear mapping of bone MR intensities into [200 HU, 1000 HU]. Results: Three brain patients having registered MRI and CT are used for validation. CT intensity classification into 4 classes is performed by thresholding. Dice indices and misclassification errors are quantified. Correct classification rates for soft tissue, bone, and air are respectively 89.67%, 77.8%, and 64.5%. Dice indices are acceptable for bone (0.74) and soft tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement to CT-based DRRs. Conclusion: The proposed approach makes it possible to use T1-weighted MRI to generate an accurate pseudo-CT from the 4-class segmentation.
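
    The intensity-assignment step described above reduces to a per-class mapping. The sketch below follows that description (air/background → −1000 HU, soft tissue mapped linearly into [−400, 200] HU, bone mapped inverse-linearly into [200, 1000] HU); the MR intensities and labels are hypothetical.

```python
import numpy as np

# Pseudo-CT intensity assignment from a 4-class label map.

def pseudo_ct(mr, labels):
    hu = np.full(mr.shape, -1000.0)                     # air and background
    for cls, lo, hi, invert in (("soft", -400.0, 200.0, False),
                                ("bone",  200.0, 1000.0, True)):
        mask = labels == cls
        if mask.any():
            m0, m1 = mr[mask].min(), mr[mask].max()
            t = (mr[mask] - m0) / max(m1 - m0, 1e-12)   # normalize to [0, 1]
            hu[mask] = hi - t * (hi - lo) if invert else lo + t * (hi - lo)
    return hu

mr = np.array([5.0, 120.0, 200.0, 300.0, 80.0, 260.0])
labels = np.array(["air", "soft", "soft", "bone", "soft", "bone"])
hu = pseudo_ct(mr, labels)
assert hu[0] == -1000.0
assert np.all((hu[labels == "soft"] >= -400) & (hu[labels == "soft"] <= 200))
assert np.all((hu[labels == "bone"] >= 200) & (hu[labels == "bone"] <= 1000))
```

    The inverse mapping for bone reflects that cortical bone is dark on T1 MRI but bright (high HU) on CT.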

  6. Development, Evaluation, and Sensitivity Analysis of Parametric Finite Element Whole-Body Human Models in Side Impacts.

    PubMed

    Hwang, Eunjoo; Hu, Jingwen; Chen, Cong; Klein, Katelyn F; Miller, Carl S; Reed, Matthew P; Rupp, Jonathan D; Hallman, Jason J

    2016-11-01

    Occupant stature and body shape may have significant effects on injury risks in motor vehicle crashes, but current finite element (FE) human body models (HBMs) represent occupants of only a few sizes and shapes. Our recent studies have demonstrated that, by using a mesh morphing method, parametric FE HBMs can be rapidly developed to represent a diverse population. However, the biofidelity of those models across a wide range of human attributes has not been established. Therefore, the objectives of this study are 1) to evaluate the accuracy of HBMs considering subject-specific geometry information, and 2) to apply the parametric HBMs in a sensitivity analysis to identify the specific parameters affecting body responses in side impact conditions. Four side-impact tests with two male post-mortem human subjects (PMHSs) were selected to evaluate the accuracy of the geometry and impact responses of the morphed HBMs. For each PMHS test, three HBMs were simulated for comparison with the test results: the original Total Human Model for Safety (THUMS) v4.01 (O-THUMS), a parametric THUMS (P-THUMS), and a subject-specific THUMS (S-THUMS). The P-THUMS geometry was predicted from only age, sex, stature, and BMI using our statistical geometry models of the skeleton and body shape, while the S-THUMS geometry was based on each PMHS's CT data. The simulation results showed a preliminary trend that the correlations between the P-THUMS-predicted impact responses and the four PMHS tests (mean-CORA: 0.84, 0.78, 0.69, 0.70) were better than those between the O-THUMS and the normalized PMHS responses (mean-CORA: 0.74, 0.72, 0.55, 0.63), and similar to the correlations between the S-THUMS and the PMHS tests (mean-CORA: 0.85, 0.85, 0.67, 0.72). The sensitivity analysis using the P-THUMS showed that, in side impact conditions, the HBM skeleton and body shape geometries as well as the body posture were more important in modeling occupant impact responses than the bone and soft tissue material properties and the padding stiffness within the given parameter ranges. More investigations are needed to further support these findings.

  7. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability the true angles do not lie on the quantization grid, which results in power leakage and a severe reduction in CE performance. A new model is first proposed to formulate the off-grid problem. It divides each continuously distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding-based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it greatly enhances the angle detection resolution.

  8. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials.

    PubMed

    Prabhu, Rajkumar; Whittington, Wilburn R; Patnaik, Sourav S; Mao, Yuxiong; Begonia, Mark T; Williams, Lakiesha N; Liao, Jun; Horstemeyer, M F

    2015-05-18

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 s⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experimental and FEA strain gage values and first invariants of stress were in good agreement.
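
    The wave-propagation data reduction mentioned above follows the standard one-wave SHPB relations: specimen strain rate from the reflected wave, stress from the transmitted wave. The bar properties and synthetic gauge signals below are illustrative assumptions, not the experimental values.

```python
import numpy as np

# Standard one-wave SHPB reduction of strain-gauge signals to specimen
# stress and strain, using made-up bar constants and synthetic signals.

E_bar = 200e9                 # bar elastic modulus (Pa)
c0 = 5000.0                   # bar wave speed (m/s)
A_bar, A_s = 2.0e-4, 1.0e-4   # bar / specimen cross-sections (m^2)
L_s = 5.0e-3                  # specimen length (m)

t = np.linspace(0, 100e-6, 1001)
eps_r = -0.001 * np.sin(np.pi * t / t[-1]) ** 2    # reflected gauge signal
eps_t = 0.0005 * np.sin(np.pi * t / t[-1]) ** 2    # transmitted gauge signal

strain_rate = -2 * c0 * eps_r / L_s                # peaks near 2000 1/s here
strain = np.cumsum(strain_rate) * (t[1] - t[0])    # time integration
stress = E_bar * A_bar * eps_t / A_s               # one-wave stress

assert strain_rate.max() > 1900 and stress.max() > 1e8
```

    The subsequent ISV/FEA step in the paper calibrates a material model against these reduced curves rather than replacing the reduction itself.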

  9. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials

    PubMed Central

    Prabhu, Rajkumar; Whittington, Wilburn R.; Patnaik, Sourav S.; Mao, Yuxiong; Begonia, Mark T.; Williams, Lakiesha N.; Liao, Jun; Horstemeyer, M. F.

    2015-01-01

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 s⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experimental and FEA strain gage values and first invariants of stress were in good agreement. PMID:26067742

  10. Importance of the ion-pair interactions in the OPEP coarse-grained force field: parametrization and validation.

    PubMed

    Sterpone, Fabio; Nguyen, Phuong H; Kalimeri, Maria; Derreumaux, Philippe

    2013-10-08

    We have derived new effective interactions that improve the description of ion pairs in the OPEP coarse-grained force field without introducing explicit electrostatic terms. The iterative Boltzmann inversion method was used to extract these potentials from all-atom simulations by targeting the radial distribution function of the distance between the centers of mass of the side chains. The new potentials have been tested on several systems that differ in structural properties, thermodynamic stabilities and number of ion pairs. Our modeling, by refining the packing of the charged amino acids, impacts the stability of secondary-structure motifs and the population of intermediate states during temperature folding/unfolding; it also improves the aggregation propensity of peptides. The new version of the OPEP force field has the potential to describe a large spectrum of situations where salt bridges are key interactions more realistically.
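
    One update step of iterative Boltzmann inversion is compact: the coarse-grained pair potential is corrected by kT·ln(g_current/g_target). The radial distribution functions below are synthetic stand-ins for the all-atom target and a coarse-grained trial run, not OPEP data.

```python
import numpy as np

# One iterative Boltzmann inversion (IBI) correction step on synthetic RDFs.

kT = 2.494                                           # kJ/mol at 300 K
r = np.linspace(0.2, 1.2, 101)                       # pair distance (nm)
g_target = 1 + 0.8 * np.exp(-((r - 0.50) / 0.08) ** 2)   # all-atom target RDF
g_current = 1 + 0.5 * np.exp(-((r - 0.55) / 0.10) ** 2)  # CG trial-run RDF

V_old = -kT * np.log(g_target)                       # initial guess: the PMF
V_new = V_old + kT * np.log(g_current / g_target)    # IBI correction

# where the trial g underestimates the target peak, the potential deepens
i_peak = np.argmin(np.abs(r - 0.5))
assert V_new[i_peak] < V_old[i_peak]
```

    In practice the corrected potential is fed back into a new CG simulation and the step is repeated until the RDFs match.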

  11. Calibration of ultra-high frequency (UHF) partial discharge sensors using FDTD method

    NASA Astrophysics Data System (ADS)

    Ishak, Asnor Mazuan; Ishak, Mohd Taufiq

    2018-02-01

    Ultra-high frequency (UHF) partial discharge (PD) sensors are widely used for condition monitoring and defect location in the insulation systems of high-voltage equipment. Designing sensors for specific applications often requires an iterative process of manufacturing, testing and mechanical modification. This paper demonstrates the use of the finite-difference time-domain (FDTD) technique as a tool to predict the frequency response of UHF PD sensors. Using this approach, the design process can be simplified and parametric studies can be conducted to assess the influence of component dimensions and material properties on the sensor response. The modeling approach is validated using a gigahertz transverse electromagnetic (GTEM) calibration system. The use of a transient excitation source is particularly suitable for FDTD modeling, which can simulate the step-response output voltage of the sensor, from which the frequency response is obtained using the same post-processing applied to the physical measurement.
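
    The transient-excitation idea can be shown with a bare-bones 1-D Yee-scheme FDTD update: leapfrog E and H in time, inject a Gaussian pulse, record a probe, and Fourier-transform the record to get a frequency response. Grid size, source, and probe location below are arbitrary; real sensor models are 3-D with material properties.

```python
import numpy as np

# Minimal 1-D FDTD (Yee leapfrog) with a Gaussian transient source.

n, steps = 200, 600
E = np.zeros(n)
H = np.zeros(n - 1)
c = 0.5                                       # Courant number (stable for c <= 1)
probe = []
for k in range(steps):
    H += c * np.diff(E)                       # update H from the curl of E
    E[1:-1] += c * np.diff(H)                 # update E from the curl of H
    E[50] += np.exp(-((k - 60) / 15) ** 2)    # soft Gaussian source at cell 50
    probe.append(E[120])

spectrum = np.abs(np.fft.rfft(probe))         # transient record -> freq. response
assert spectrum.size == steps // 2 + 1
assert np.max(np.abs(probe)) > 0.01           # the pulse reached the probe
```

    Dividing such a spectrum by the spectrum of the excitation is the post-processing step that yields the sensor's frequency response in both simulation and measurement.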

  12. Self-guided method to search maximal Bell violations for unknown quantum states

    NASA Astrophysics Data System (ADS)

    Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can

    2017-11-01

    In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Given that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two-qubit, three-qubit, and two-qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove SGA is also feasible when facing more complex Bell scenarios, e.g., the d-setting d-outcome Bell inequality. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
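
    The flavor of the approach can be sketched on the simplest case, the two-setting CHSH inequality. The sketch below is illustrative, not the paper's algorithm: it assumes the singlet state's correlations E(a,b) = -cos(a-b) in closed form (in the self-guided experiment they would be estimated from measurements), and uses a random-direction (SPSA-like) gradient estimate:

    ```python
    import numpy as np

    def bell_violation(angles):
        """|CHSH| value for the two-qubit singlet, whose correlations are
        E(a, b) = -cos(a - b); the quantum maximum is 2*sqrt(2)."""
        a1, a2, b1, b2 = angles
        E = lambda x, y: -np.cos(x - y)
        return abs(E(a1, b1) + E(a2, b1) + E(a1, b2) - E(a2, b2))

    def sga_maximize(f, x0, iters=300, lr=0.05, eps=1e-3, seed=0):
        """Stochastic gradient ascent with a random-direction slope estimate."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            d = rng.standard_normal(x.size)                        # probe direction
            slope = (f(x + eps * d) - f(x - eps * d)) / (2 * eps)  # directional derivative
            x = x + lr * slope * d                                 # ascend along probe
        return x, f(x)
    ```

    Starting from generic measurement angles, the ascent drives the CHSH value above the classical bound of 2, toward the Tsirelson bound 2√2.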

  13. Modeling of capacitor charging dynamics in an energy harvesting system considering accurate electromechanical coupling effects

    NASA Astrophysics Data System (ADS)

    Bagheri, Shahriar; Wu, Nan; Filizadeh, Shaahin

    2018-06-01

    This paper presents an iterative numerical method that accurately models an energy harvesting system charging a capacitor with piezoelectric patches. The constitutive relations of piezoelectric materials connected to an external charging circuit with a diode bridge and capacitors lead to an electromechanical coupling effect and to the difficulty of deriving the accurate transient mechanical response as well as the charging progress. The proposed model is built upon the Euler-Bernoulli beam theory and takes into account the electromechanical coupling effects as well as the dynamic process of charging an external storage capacitor. The model is validated through experimental tests on a cantilever beam coated with piezoelectric patches. Several parametric studies are performed and the functionality of the model is verified. The efficiency of the power harvesting system can be predicted and tuned considering variations in different design parameters. Such a model can be utilized to design robust and optimal energy harvesting systems.
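
    A heavily simplified sketch of the charging dynamics is given below: an ideal full-bridge rectifier feeding a storage capacitor, integrated with forward Euler. The paper's model couples this circuit to Euler-Bernoulli beam dynamics and is far more detailed; all names and values here are illustrative:

    ```python
    import numpy as np

    def charge_capacitor(i_piezo, dt=1e-4, C=1e-6):
        """Forward-Euler integration of a storage capacitor fed through an
        ideal diode bridge: the rectified piezo current |i| charges C,
        so dV/dt = |i| / C at each time step."""
        v = np.zeros(len(i_piezo) + 1)
        for k, i in enumerate(i_piezo):
            v[k + 1] = v[k] + dt * abs(i) / C
        return v
    ```

    Because the bridge rectifies the alternating piezo current, the capacitor voltage is monotonically non-decreasing in this idealized sketch.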

  14. Self responses along cingulate cortex reveal quantitative neural phenotype for high functioning autism

    PubMed Central

    Chiu, Pearl H.; Kayali, M. Amin; Kishida, Kenneth T.; Tomlin, Damon; Klinger, Laura G.; Klinger, Mark R.; Montague, P. Read

    2014-01-01

    Attributing behavioral outcomes correctly to oneself or to other agents is essential for all productive social exchange. We approach this issue in high-functioning males with autism spectrum disorder (ASD) using two separate fMRI paradigms. First, using a visual imagery task, we extract a basis set for responses along the cingulate cortex of control subjects that reveals an agent-specific eigenvector (self eigenmode) associated with imagining oneself executing a specific motor act. Second, we show that the same self eigenmode arises during one's own decision (the self phase) in an interpersonal exchange game (iterated trust game). Third, using this exchange game, we show that ASD males exhibit a severely diminished self eigenmode when playing the game with a human partner. This diminished response covaries parametrically with their behaviorally assessed symptom severity, suggesting its value as an objective endophenotype. These findings may provide a quantitative assessment tool for high-functioning ASD. PMID:18255038

  15. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
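
    The Tikhonov-regularized starting point for such ill-posed inversions, before the robustness and low-rank terms the paper adds, can be sketched as a damped normal-equations solve. This generic snippet is not the paper's algorithm:

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam=0.1):
        """Minimize ||A g - b||^2 + lam * ||g||^2 via the normal equations.

        The lam * I term stabilizes an ill-conditioned sensitivity matrix A
        (typical of ECT) at the cost of a small bias in the solution.
        """
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    ```

    Even for a nearly singular system, the regularized solve stays finite and fits the data to within a tolerance set by `lam`.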

  16. Simulations of roughness initiation and growth on railway rails

    NASA Astrophysics Data System (ADS)

    Sheng, X.; Thompson, D. J.; Jones, C. J. C.; Xie, G.; Iwnicki, S. D.; Allen, P.; Hsu, S. S.

    2006-06-01

    A model for the prediction of the initiation and growth of roughness on the rail is presented. The vertical interaction between a train and the track is calculated as a time history for single or multiple wheels moving on periodically supported rails, using a wavenumber-based approach. This vertical dynamic wheel/rail force arises from the varying stiffness due to discrete supports (i.e. parametric excitation) and the roughness excitation on the railhead. The tangential contact problem between the wheel and rail is modelled using an unsteady two-dimensional approach and also using the three-dimensional contact model, FASTSIM. This enables the slip and stick regions in the contact patch to be identified from the input geometry and creepage between the wheel and rail. The long-term wear growth is then predicted by applying repeated passages of the vehicle wheelsets, as part of an iterative solution.

  17. On the mechanical behaviours of a craze in particulate-polymer composites

    NASA Astrophysics Data System (ADS)

    Zhang, Y. M.; Zhang, W. G.; Fan, M.; Xiao, Z. M.

    2018-05-01

    In polymeric composites, well-defined inclusions are incorporated into the polymer matrix to alleviate the brittleness of polymers. When a craze is initiated in such a composite, the interaction between the craze and the surrounding inclusions will greatly affect the composite's mechanical behaviours and toughness. To the best of the authors' knowledge, little research has so far addressed the interaction between a craze and nearby inclusions in particulate-polymer composites. In the current study, for the first time, the influences of the surrounding inclusions on the craze are investigated in particulate-polymer composites. The three-phase model is adopted to study the fracture behaviours of the craze affected by multiple inclusions. An iterative procedure is proposed to solve for the stress intensity factors. Parametric studies are performed to investigate the influences of the reinforcing particle volume fraction and the shear modulus ratio on the fracture behaviours of particulate-polymer composites.

  18. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
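
    The core Soft-Impute loop described above is short. This sketch uses a dense SVD rather than the low-rank, warm-started SVD that gives the paper its scalability:

    ```python
    import numpy as np

    def soft_impute(X, mask, lam=0.01, iters=200):
        """Soft-Impute: alternate between imputing missing entries from the
        current estimate and soft-thresholding the singular values."""
        Z = np.where(mask, X, 0.0)
        for _ in range(iters):
            filled = np.where(mask, X, Z)               # keep observed, impute missing
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            Z = (U * np.maximum(s - lam, 0.0)) @ Vt     # soft-thresholded SVD
        return Z
    ```

    On a low-rank matrix with a few missing entries, the iteration recovers the missing values up to the small shrinkage induced by the regularization parameter.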

  19. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  20. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    PubMed

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.

  1. MRI-guided attenuation correction in whole-body PET/MR: assessment of the effect of bone attenuation.

    PubMed

    Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H

    2013-02-01

    Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
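
    Collapsing a continuous attenuation map into the tissue classes compared in this study amounts to a digitize-and-relabel step. The thresholds and per-class attenuation values below are purely illustrative, not the study's calibration:

    ```python
    import numpy as np

    def classify_mu_map(mu, edges, class_values):
        """Assign each voxel the attenuation value of its tissue class
        (e.g. air, lung, soft tissue, bone) by thresholding the input map."""
        idx = np.digitize(mu, edges)          # bin index per voxel
        return np.asarray(class_values)[idx]  # class attenuation per voxel
    ```

    With three thresholds this produces the four-class (air, lung, soft tissue, bone) map discussed in the abstract; dropping the last threshold and class merges bone into soft tissue, which is the omission the study quantifies.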

  2. Shape matters: The case for Ellipsoids and Ellipsoidal Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tillack, Andreas F.; Robinson, Bruce H.

    We describe the shape potentials used for the van der Waals interactions between soft ellipsoids used to coarse-grain molecular moieties in our Metropolis Monte-Carlo simulation software. The morphologies resulting from different expressions for these van der Waals interaction potentials are discussed for the case of a prolate spheroid system with a strong dipole at the ellipsoid center. We also show that the calculation of ellipsoids is, at worst, only about fivefold more expensive computationally when compared to a simple Lennard-Jones sphere. Finally, as an application of the ellipsoidal shape we parametrize water from the original SPC water model and observe – just through the difference in shape alone – a significant improvement of the O-O radial distribution function when compared to experimental data.

  3. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

    A series of Reynolds-averaged Navier-Stokes calculations were employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantify the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
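
    The response-surface step of such a design-of-experiments study, fitting linear, bilinear, and curvilinear effects to the simulation outputs, can be sketched with an ordinary least-squares fit. Variable names here are generic, not those of the NPARC study:

    ```python
    import numpy as np

    def quadratic_features(X):
        """Design matrix with intercept, linear, bilinear, and squared terms."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]                                     # linear
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # bilinear
        cols += [X[:, i] ** 2 for i in range(k)]                                # curvilinear
        return np.column_stack(cols)

    def fit_response_surface(X, y):
        """Least-squares coefficients of the full second-order response surface."""
        coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
        return coef
    ```

    With noiseless responses generated from a known second-order model, the fit recovers the effect coefficients exactly; in practice the fitted coefficients are then screened for statistical significance, as in the abstract.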

  4. Research on respiratory motion correction method based on liver contrast-enhanced ultrasound images of single mode

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Li, Tao; Zheng, Shiqiang; Li, Yiyong

    2015-03-01

    This study aims to reduce the effects of respiratory motion in quantitative analysis based on single-mode liver contrast-enhanced ultrasound (CEUS) image sequences. The image gating method and an iterative registration method using a model image were adopted to register single-mode liver CEUS image sequences. The feasibility of the proposed respiratory motion correction method was explored preliminarily using 10 hepatocellular carcinoma CEUS cases. The positions of the lesions in the time series of 2D ultrasound images after correction were visually evaluated. Before and after correction, the quality of the weighted sum of transit time (WSTT) parametric images was also compared, in terms of accuracy and spatial resolution. For the corrected and uncorrected sequences, the mean deviation values (mDVs) of time-intensity curve (TIC) fitting derived from the CEUS sequences were measured. After the correction, the positions of the lesions in the time series of 2D ultrasound images were almost invariant. In contrast, the lesions in the uncorrected images all shifted noticeably. The quality of the WSTT parametric maps derived from the liver CEUS image sequences was greatly improved. Moreover, the mDVs of TIC fitting derived from the CEUS sequences after the correction decreased by an average of 48.48 ± 42.15. The proposed correction method could improve the accuracy of quantitative analysis based on single-mode liver CEUS image sequences, which would help in enhancing the differential diagnosis efficiency of liver tumors.

  5. Design and Parametric Sizing of Deep Space Habitats Supporting NASA'S Human Space Flight Architecture Team

    NASA Technical Reports Server (NTRS)

    Toups, Larry; Simon, Matthew; Smitherman, David; Spexarth, Gary

    2012-01-01

    NASA's Human Space Flight Architecture Team (HAT) is a multi-disciplinary, cross-agency study team that conducts strategic analysis of integrated development approaches for human and robotic space exploration architectures. During each analysis cycle, HAT iterates and refines the definition of design reference missions (DRMs), which inform the definition of a set of integrated capabilities required to explore multiple destinations. An important capability identified in this capability-driven approach is habitation, which is necessary for crewmembers to live and work effectively during long-duration transits to and operations at exploration destinations beyond Low Earth Orbit (LEO). This capability is captured by an element referred to as the Deep Space Habitat (DSH), which provides all equipment and resources for the functions required to support crew safety, health, and work including: life support, food preparation, waste management, sleep quarters, and housekeeping. The purpose of this paper is to describe the design of the DSH capable of supporting crew during exploration missions. First, the paper describes the functionality required in a DSH to support the HAT defined exploration missions, the parameters affecting its design, and the assumptions used in the sizing of the habitat. Then, the process used for arriving at parametric sizing estimates to support additional HAT analyses is detailed. Finally, results from the HAT Cycle C DSH sizing are presented followed by a brief description of the remaining design trades and technological advancements necessary to enable the exploration habitation capability.

  6. Application of a relativistic accretion disc model to X-ray spectra of LMC X-1 and GRO J1655-40

    NASA Astrophysics Data System (ADS)

    Gierliński, Marek; Maciołek-Niedźwiecki, Andrzej; Ebisawa, Ken

    2001-08-01

    We present a general relativistic accretion disc model and its application to the soft-state X-ray spectra of black hole binaries. The model assumes a flat, optically thick disc around a rotating Kerr black hole. The disc locally radiates away the dissipated energy as a blackbody. Special and general relativistic effects influencing photons emitted by the disc are taken into account. The emerging spectrum, as seen by a distant observer, is parametrized by the black hole mass and spin, the accretion rate, the disc inclination angle and the inner disc radius. We fit the ASCA soft-state X-ray spectra of LMC X-1 and GRO J1655-40 by this model. We find that, having additional limits on the black hole mass and inclination angle from optical/UV observations, we can constrain the black hole spin from X-ray data. In LMC X-1 the constraint is weak, and we can only rule out the maximally rotating black hole. In GRO J1655-40 we can limit the spin much better, and we find 0.68 ≤ a ≤ 0.88. Accretion discs in both sources are radiation-pressure dominated. We do not find Compton reflection features in the spectra of any of these objects.

  7. Spectral CT of the extremities with a silicon strip photon counting detector

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.

  8. Limb muscle sound speed estimation by ultrasound computed tomography excluding receivers in bone shadow

    NASA Astrophysics Data System (ADS)

    Qu, Xiaolei; Azuma, Takashi; Lin, Hongxiang; Takeuchi, Hideki; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro

    2017-03-01

    Sarcopenia is the degenerative loss of skeletal muscle ability associated with aging. One reason is the increasing adipose ratio of muscle, which can be estimated from the speed of sound (SOS), since the SOSs of muscle and adipose differ by about 7%. For SOS imaging, the conventional bent-ray method iteratively finds ray paths and corrects the SOS along them using travel-times. However, the iteration is difficult to converge for soft tissue with bone inside, because of the large speed variation. In this study, the bent-ray method is modified to produce SOS images of limb muscle with bone inside. The modified method includes three steps. First, the travel-time is picked using a proposed Akaike Information Criterion (AIC) with energy term (AICE) method. The energy term is employed to detect and discard the transmissive wave through bone (a low-energy wave). This results in a failed reconstruction for bone, but makes the iteration converge and gives the correct SOS for skeletal muscle. Second, ray paths are traced using Fermat's principle. Finally, the simultaneous algebraic reconstruction technique (SART) is employed to correct the SOS along ray paths, excluding paths with low-energy waves that may pass through bone. The simulation evaluation was implemented with the k-Wave toolbox using a model of the upper arm. As a result, the SOS of muscle was 1572.0 ± 7.3 m/s, close to the 1567.0 m/s in the model. For in vivo evaluation, a ring transducer prototype was employed to scan cross sections of the lower arm and leg of a healthy volunteer. The skeletal muscle SOSs were 1564.0 ± 14.8 m/s and 1564.1 ± 18.0 m/s, respectively.
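
    The SART correction step used in the final stage above has a compact generic form; this sketch omits the ray tracing and the exclusion of bone-shadowed receivers that the paper adds:

    ```python
    import numpy as np

    def sart(A, b, iters=200, relax=1.0):
        """SART: update the image x so that A @ x matches the projection
        data b, normalizing the residual by ray (row) weights and the
        backprojection by pixel (column) weights."""
        row_sums = A.sum(axis=1)
        col_sums = A.sum(axis=0)
        row_sums[row_sums == 0] = 1.0  # avoid division by zero
        col_sums[col_sums == 0] = 1.0
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            residual = (b - A @ x) / row_sums          # normalized data mismatch
            x = x + relax * (A.T @ residual) / col_sums  # weighted backprojection
        return x
    ```

    For a consistent system with relaxation in (0, 2), the iterates converge to a solution of A x = b.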

  9. Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties

    NASA Astrophysics Data System (ADS)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly two times compared with a weighted least squares and filtered backprojection reconstruction of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminishes enthusiasm for exploring future application of the mixture penalty.

  10. Isolated attosecond pulses in the water window

    NASA Astrophysics Data System (ADS)

    Chang, Zenghu

    Millijoule level, few-cycle, carrier-envelope phase (CEP) stable Ti:Sapphire lasers have been the workhorse for the first generation attosecond light sources in the last decade. The spectral range of isolated attosecond pulses with sufficient photon flux for time-resolved pump-probe experiments has been limited to extreme ultraviolet (10 to 150 eV). The shortest pulses achieved are 67 as. The center wavelength of Ti:Sapphire lasers is 800 nm. It was demonstrated in 2001 that the cutoff photon energy of the high harmonic spectrum can be extended by increasing the center wavelength of the driving lasers. In recent years, mJ level, two-cycle, carrier-envelope phase stabilized lasers at 1.6 to 2.1 micron have been developed by compressing pulses from Optical Parametric Amplifiers with gas-filled hollow-core fibers or by implementing Optical Parametric Chirped Pulse Amplification (OPCPA) techniques. Recently, when long wavelength driving was combined with polarization gating, isolated soft x-rays in the water window (280-530 eV) were generated in our laboratory. The number of x-ray photons in the 120-400 eV range is comparable to that generated with Ti:Sapphire lasers in the 50 to 150 eV range. The yield of harmonic generation depends strongly on the ellipticity of the driving fields, which is the foundation of polarization gating. When the width of the gate was set to less than one half of the laser cycle, a soft x-ray supercontinuum was generated. The intensity of the gated x-ray spectrum is sensitive to the carrier-envelope phase of the driving laser, which indicates that single isolated attosecond pulses were generated. The ultrabroadband isolated x-ray pulses with 53 as duration were characterized by attosecond streaking measurements. 
This work has been supported by the DARPA PULSE program (W31P4Q1310017); the Army Research Office (W911NF-14-1-0383, W911NF-15-1- 0336); the Air Force Office of Scientific Research (FA9550-15-1-0037, FA9550-16-1-0149), and NSF 1506345.

  11. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces. It can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows a more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving accuracies of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure: (1) increases the number of reconstructed points by up to 43 % and (2) does not affect the accuracy of the 3D reconstructions significantly. Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their real-time applicability, but it can be greatly reduced by using a GPU implementation.
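
    The illumination robustness attributed to the census transform in Method 2 comes from encoding only the sign of neighbor-versus-center comparisons. A basic (unmodified) census transform can be sketched as follows; the paper's modified variant differs in details not reproduced here:

    ```python
    import numpy as np

    def census_transform(img, win=3):
        """Encode each pixel as a bit string of neighbor > center comparisons.

        Any monotone brightness change (gain/offset) leaves the code
        unchanged, which is what makes census-based stereo matching
        robust to illumination variation."""
        r = win // 2
        h, w = img.shape
        padded = np.pad(img, r, mode='edge')  # replicate borders
        out = np.zeros((h, w), dtype=np.uint32)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue  # skip the center pixel itself
                neighbor = padded[r + dy:r + dy + h, r + dx:r + dx + w]
                out = (out << 1) | (neighbor > img).astype(np.uint32)
        return out
    ```

    Applying a gain and offset to the image leaves every comparison, and hence every census code, unchanged.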

  12. Concept for a beryllium divertor with in-situ plasma spray surface regeneration

    NASA Astrophysics Data System (ADS)

    Smith, M. F.; Watson, R. D.; McGrath, R. T.; Croessmann, C. D.; Whitley, J. B.; Causey, R. A.

    1990-04-01

    Two serious problems with the use of graphite tiles on the ITER divertor are the limited lifetime due to erosion and the difficulty of replacing broken tiles inside the machine. Beryllium is proposed as an alternative low-Z armor material because the plasma spray process can be used to make in-situ repairs of eroded or damaged surfaces. Recent advances in plasma spray technology have produced beryllium coatings of 98% density with a 95% deposition efficiency and strong adhesion to the substrate. With existing technology, the entire active region of the ITER divertor surface could be coated with 2 mm of beryllium in less than 15 h using four small plasma spray guns. Beryllium also has other potential advantages over graphite, e.g., efficient gettering of oxygen, ten times less tritium inventory, reduced problems of transient fueling from D/T exchange and release, no runaway erosion cascades from self-sputtering, better adhesion of redeposited material, as well as higher strength, ductility, and fracture toughness than graphite. A 2-D finite element stress analysis was performed on a 3 mm thick Be tile brazed to an OFHC soft-copper saddle block, which was brazed to a high-strength copper tube. Peak stresses remained 50% below the ultimate strength for both brazing and in-service thermal stresses.

  13. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified components). However, with increasing ASIR contribution, the percentage of plaque volume component between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which has been developed for noise reduction in the latest high-resolution CCTA scans, can be used reliably without interfering with plaque analysis and stenosis severity assessment.
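
    For intuition only: vendor ASIR implementations are proprietary, but an "ASIR p %" reconstruction is commonly described as a weighted blend of the filtered back projection image and the fully iterative image, which is the sense in which 0 % means pure FBP and 100 % means no FBP. A minimal sketch under that assumption (not GE's actual algorithm):

```python
import numpy as np

def asir_blend(fbp_img, ir_img, asir_pct):
    # Hypothetical blend model: 0 % -> pure FBP, 100 % -> pure iterative
    # reconstruction, intermediate percentages mix the two linearly.
    a = asir_pct / 100.0
    return (1.0 - a) * fbp_img + a * ir_img
```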

  14. A coupled cluster theory with iterative inclusion of triple excitations and associated equation of motion formulation for excitation energy and ionization potential

    NASA Astrophysics Data System (ADS)

    Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito

    2017-08-01

    A single reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing through two different sets of exponential operators: while one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on the excited determinant, the other is the regular single and double excitation operator in the sense of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner without a significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations and have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and have performed equation of motion with coupled cluster singles, doubles, and triples to obtain the ionization potential and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.

  15. A filament of dark matter between two clusters of galaxies.

    PubMed

    Dietrich, Jörg P; Werner, Norbert; Clowe, Douglas; Finoguenov, Alexis; Kitching, Tom; Miller, Lance; Simionescu, Aurora

    2012-07-12

    It is a firm prediction of the concordance cold-dark-matter cosmological model that galaxy clusters occur at the intersection of large-scale structure filaments. The thread-like structure of this 'cosmic web' has been traced by galaxy redshift surveys for decades. More recently, the warm–hot intergalactic medium (a sparse plasma with temperatures of 10^5 to 10^7 kelvin) residing in low-redshift filaments has been observed in emission and absorption. However, a reliable direct detection of the underlying dark-matter skeleton, which should contain more than half of all matter, has remained elusive, because earlier candidates for such detections were either falsified or suffered from low signal-to-noise ratios and unphysical misalignments of dark and luminous matter. Here we report the detection of a dark-matter filament connecting the two main components of the Abell 222/223 supercluster system from its weak gravitational lensing signal, both in a non-parametric mass reconstruction and in parametric model fits. This filament is coincident with an overdensity of galaxies and diffuse, soft-X-ray emission, and contributes a mass comparable to that of an additional galaxy cluster to the total mass of the supercluster. By combining this result with X-ray observations, we can place an upper limit of 0.09 on the hot gas fraction (the mass of X-ray-emitting gas divided by the total mass) in the filament.

  16. Analytical solution of concentric two-pole Halbach cylinders as a preliminary design tool for magnetic refrigeration systems

    NASA Astrophysics Data System (ADS)

    Fortkamp, F. P.; Lozano, J. A.; Barbosa, J. R.

    2017-12-01

    This work presents a parametric analysis of the performance of nested permanent magnet Halbach cylinders intended for applications in magnetic refrigeration and heat pumping. An analytical model for the magnetic field generated by the cylinders is used to systematically investigate the influence of their geometric parameters. The proposed configuration generates two poles in the air gap between the cylinders, where active magnetic regenerators are positioned for conversion of magnetic work into cooling capacity or heat power. A sample geometry based on previous designs of magnetic refrigerators is investigated, and the results show that the magnetic field in the air gap oscillates between 0 and approximately 1 T, forming a rectified cosine profile along the circumference of the gap. Calculations of the energy density of the magnets indicate the need to operate at a low energy (particularly in the inner cylinder) in order to generate a magnetic profile suitable for a magnetic cooler. In practice, these low-energy regions of the magnet can potentially be replaced by soft ferromagnetic material. A parametric analysis of the air gap height has been performed, showing that there are optimal values which maximize the magnet efficiency parameter Λ_cool. Some combinations of cylinder radii resulted in magnetic field changes that were too small for practical purposes. No demagnetization of the cylinders has been found for the range of parameters considered.
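
    Two relations from the abstract can be made concrete. A hedged sketch (the paper's full analytical model for nested cylinders is more involved than this): the interior field of an ideal dipole Halbach cylinder, B = B_rem ln(R_o/R_i), and the rectified-cosine magnitude profile of the two-pole gap field, with b_max of about 1 T for the sample geometry.

```python
import numpy as np

def ideal_halbach_dipole_field(b_rem, r_outer, r_inner):
    # Interior flux density of an ideal (continuously magnetized) dipole
    # Halbach cylinder: B = B_rem * ln(r_outer / r_inner).
    return b_rem * np.log(r_outer / r_inner)

def gap_field_magnitude(b_max, theta):
    # Rectified-cosine profile of the two-pole field along the gap
    # circumference, oscillating between 0 and b_max.
    return b_max * np.abs(np.cos(theta))
```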

  17. Nonlinear dynamic phenomena in the space shuttle thermal protection system

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Edighoffer, H. H.; Park, K. C.

    1981-01-01

    The development of an analysis for examining the nonlinear dynamic phenomena arising in the space shuttle orbiter tile/pad thermal protection system is presented. The tile/pad system consists of ceramic tiles bonded to the aluminum skin of the orbiter through a thin nylon felt pad. The pads are a soft nonlinear material which permits large strains and displays both hysteretic and nonlinear viscous damping. Application of the analysis to a square tile subjected to transverse sinusoidal motion of the orbiter skin is presented and the following nonlinear dynamic phenomena are considered: highly distorted wave forms, amplitude-dependent resonant frequencies which initially decrease and then increase with increasing amplitude of motion, magnification of substrate motion which is higher than would be expected in a similarly highly damped linear system, and classical parametric resonance instability.

  18. High-force magnetic tweezers with force feedback for biological applications

    NASA Astrophysics Data System (ADS)

    Kollmannsberger, Philip; Fabry, Ben

    2007-11-01

    Magnetic micromanipulation using magnetic tweezers is a versatile biophysical technique and has been used for single-molecule unfolding, rheology measurements, and studies of force-regulated processes in living cells. This article describes an inexpensive magnetic tweezer setup for the application of precisely controlled forces up to 100 nN onto 5 μm magnetic beads. High precision of the force is achieved by a parametric force calibration method together with a real-time control of the magnetic tweezer position and current. High forces are achieved by bead-magnet distances of only a few micrometers. Applying such high forces can be used to characterize the local viscoelasticity of soft materials in the nonlinear regime, or to study force-regulated processes and mechanochemical signal transduction in living cells. The setup can be easily adapted to any inverted microscope.

  19. High-force magnetic tweezers with force feedback for biological applications.

    PubMed

    Kollmannsberger, Philip; Fabry, Ben

    2007-11-01

    Magnetic micromanipulation using magnetic tweezers is a versatile biophysical technique and has been used for single-molecule unfolding, rheology measurements, and studies of force-regulated processes in living cells. This article describes an inexpensive magnetic tweezer setup for the application of precisely controlled forces up to 100 nN onto 5 microm magnetic beads. High precision of the force is achieved by a parametric force calibration method together with a real-time control of the magnetic tweezer position and current. High forces are achieved by bead-magnet distances of only a few micrometers. Applying such high forces can be used to characterize the local viscoelasticity of soft materials in the nonlinear regime, or to study force-regulated processes and mechanochemical signal transduction in living cells. The setup can be easily adapted to any inverted microscope.

  20. On optimal soft-decision demodulation. [in digital communication system]

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1976-01-01

    A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
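
    The objective being maximized above has a compact form. A sketch, assuming equiprobable inputs: the symmetric cutoff rate of the discrete memoryless channel induced by a J-ary demodulator acting on M-ary signals.

```python
import numpy as np

def symmetric_cutoff_rate(p):
    # p[i, j] = P(output j | input i) for an M-input, J-output DMC.
    # Symmetric cutoff rate in bits/symbol with equiprobable inputs:
    #   R0 = -log2( sum_j ( (1/M) * sum_i sqrt(p[i, j]) )**2 )
    m = p.shape[0]
    per_output = (np.sqrt(p).sum(axis=0) / m) ** 2
    return -np.log2(per_output.sum())
```

    A noiseless M-ary channel gives R0 = log2 M; a J-ary quantization of the demodulator output is "good" for coding when it loses little of this quantity.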

  1. On a gas electron multiplier based synthetic diagnostic for soft x-ray tomography on WEST with focus on impurity transport studies

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-08-01

    The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma facing components can pollute the plasma core, where they cause radiative cooling in the soft x-ray (SXR) range, which is detrimental to the energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction thanks to a synthetic diagnostic recently developed and coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.

  2. Coded excitation speeds up the detection of the fundamental flexural guided wave in coated tubes

    NASA Astrophysics Data System (ADS)

    Song, Xiaojun; Moilanen, Petro; Zhao, Zuomin; Ta, Dean; Pirhonen, Jalmari; Salmi, Ari; Hæggström, Edward; Myllylä, Risto; Timonen, Jussi; Wang, Weiqi

    2016-09-01

    The fundamental flexural guided wave (FFGW) permits ultrasonic assessment of the wall thickness of solid waveguides, such as tubes or, e.g., long cortical bones. Recently, an optical non-contact method was proposed for ultrasound excitation and detection with the aim of facilitating the FFGW reception by suppressing the interfering modes from the soft coating. This technique suffers from low SNR and requires iterative physical scanning across the source-receiver distance for 2D-FFT analysis. This means that SNR improvement achieved by temporal averaging becomes time-consuming (several minutes) which reduces the applicability of the technique, especially in time-critical applications such as clinical quantitative ultrasound. To achieve sufficient SNR faster, an ultrasonic excitation by a base-sequence-modulated Golay code (BSGC, 64-bit code pair) on coated tube samples (1-5 mm wall thickness and 5 mm soft coating layer) was used. This approach improved SNR by 21 dB and speeded up the measurement by a factor of 100 compared to using a classical pulse excitation with temporal averaging. The measurement now took seconds instead of minutes, while the ability to determine the wall thickness of the phantoms was maintained. The technique thus allows rapid noncontacting assessment of the wall thickness in coated solid tubes, such as the human bone.
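
    The key property behind Golay coded excitation can be checked in a few lines. A sketch (illustrative, not the authors' exact modulation scheme): a complementary pair of length 64 whose aperiodic autocorrelations sum to a perfect delta, so pulse compression restores pulse-like time resolution while the 64-fold longer excitation raises SNR.

```python
import numpy as np

def golay_pair(n):
    # Standard recursive construction of a complementary Golay pair of
    # length 2**n: (a, b) -> (a|b, a|-b), starting from ([1], [1]).
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    return np.correlate(x, x, mode="full")

a, b = golay_pair(6)                    # 64-bit pair, as in the abstract
compressed = autocorr(a) + autocorr(b)  # = 2N at zero lag, 0 at all other lags
```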

  3. Spectral broadening measurement of the lower hybrid waves during long pulse operation in Tore Supra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger-By, G.; Decampy, J.; Goniche, M.

    2014-02-12

    On many tokamaks (C-Mod, EAST, FTU, JET, HT-7, TS), a decrease in current drive efficiency of the Lower Hybrid (LH) waves is observed in high electron density plasmas. The cause of this behaviour is believed to be: Parametric Instabilities (PI) and Scattering from Density Fluctuations (SDF). For the ITER LH system, our knowledge must be improved to avoid such effects and to maintain the LH current drive efficiency at high density. The ITPA IOS group coordinates this effort [1] and all experimental data are essential to validate the numerical codes in progress. Usually the broadening of the LH wave frequency spectrum is measured by a probe located in the plasma edge. For this study, the frequency spectrum of a reflected power signal from the LH antenna was used. In addition, the spectrum measurements are compared with the density fluctuations observed on RF probes located at the antenna mouth. Several plasma currents (0.6 to 1.4 MA) and densities up to 5.2 × 10^19 m^-3 have been realised on Tore Supra (TS) long pulses and with high injected RF power, up to 5.4 MW-30s. This allowed using a spectrum analyser to make several measurements during the plasma pulse. The side lobe amplitude, shifted by 20-30 MHz with respect to the main peak, grows with increasing density. Furthermore, for an increase of plasma current at the same density, the spectra broaden and become asymmetric. Some parametric dependencies are shown in this paper.

  4. Parametrization of Backbone Flexibility in a Coarse-Grained Force Field for Proteins (COFFDROP) Derived from All-Atom Explicit-Solvent Molecular Dynamics Simulations of All Possible Two-Residue Peptides.

    PubMed

    Frembgen-Kesner, Tamara; Andrews, Casey T; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A; Jain, Aakash; Olayiwola, Oluwatoni J; Weishaar, Mitch R; Elcock, Adrian H

    2015-05-12

    Recently, we reported the parametrization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral, and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral, and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downward in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multidomain proteins connected by flexible linkers.
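
    The IBI update used to fit the bonded terms can be stated directly. A minimal sketch of one iteration (the function names and damping factor are illustrative): the tabulated potential is corrected by kT times the log-ratio of the currently sampled distribution to the all-atom target distribution, repeated until the two agree.

```python
import numpy as np

def ibi_update(v_current, p_sampled, p_target, kT=1.0, damping=1.0):
    # V_{n+1}(x) = V_n(x) + damping * kT * ln( p_sampled(x) / p_target(x) ):
    # regions sampled too often are pushed up in energy, under-sampled
    # regions are pulled down.
    v_next = v_current.copy()
    mask = (p_sampled > 0) & (p_target > 0)
    v_next[mask] += damping * kT * np.log(p_sampled[mask] / p_target[mask])
    return v_next
```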

  5. A note on the WGC, effective field theory and clockwork within string theory

    NASA Astrophysics Data System (ADS)

    Ibáñez, Luis E.; Montero, Miguel

    2018-02-01

    It has been recently argued that Higgsing of theories with U(1)^n gauge interactions consistent with the Weak Gravity Conjecture (WGC) may lead to effective field theories parametrically violating WGC constraints. The minimal examples typically involve Higgs scalars with a large charge with respect to a U(1) (e.g. charges (Z, 1) in U(1)^2 with Z ≫ 1). This type of Higgs multiplet also plays a key role in clockwork U(1) theories. We study these issues in the context of heterotic string theory and find that, even if there is no new physics at the standard magnetic WGC scale Λ ∼ g_IR M_P, the string scale is just slightly above, at a scale ∼ √(k_IR) Λ. Here k_IR is the level of the IR U(1) worldsheet current. We show that, unlike the standard magnetic cutoff, this bound is insensitive to subsequent Higgsing. One may argue that this constraint gives rise to no bound at the effective field theory level, since k_IR is model dependent and in general unknown. However, there is an additional constraint to be taken into account: the Higgsing scalars with large charge Z should be part of the string massless spectrum, which becomes an upper bound k_IR ≤ k_0^2, where k_0 is the level of the UV currents. Thus, for fixed k_0, Z cannot be made parametrically large. The upper bound on the charges Z leads to limitations on the size and structure of hierarchies in an iterated U(1) clockwork mechanism.

  6. Spectral broadening measurement of the lower hybrid waves during long pulse operation in Tore Supra

    NASA Astrophysics Data System (ADS)

    Berger-By, G.; Decampy, J.; Antar, G. Y.; Goniche, M.; Ekedahl, A.; Delpech, L.; Leroux, F.; Tore Supra Team

    2014-02-01

    On many tokamaks (C-Mod, EAST, FTU, JET, HT-7, TS), a decrease in current drive efficiency of the Lower Hybrid (LH) waves is observed in high electron density plasmas. The cause of this behaviour is believed to be: Parametric Instabilities (PI) and Scattering from Density Fluctuations (SDF). For the ITER LH system, our knowledge must be improved to avoid such effects and to maintain the LH current drive efficiency at high density. The ITPA IOS group coordinates this effort [1] and all experimental data are essential to validate the numerical codes in progress. Usually the broadening of the LH wave frequency spectrum is measured by a probe located in the plasma edge. For this study, the frequency spectrum of a reflected power signal from the LH antenna was used. In addition, the spectrum measurements are compared with the density fluctuations observed on RF probes located at the antenna mouth. Several plasma currents (0.6 to 1.4 MA) and densities up to 5.2 × 10^19 m^-3 have been realised on Tore Supra (TS) long pulses and with high injected RF power, up to 5.4 MW-30s. This allowed using a spectrum analyser to make several measurements during the plasma pulse. The side lobe amplitude, shifted by 20-30 MHz with respect to the main peak, grows with increasing density. Furthermore, for an increase of plasma current at the same density, the spectra broaden and become asymmetric. Some parametric dependencies are shown in this paper.

  7. Finite element methods for the biomechanics of soft hydrated tissues: nonlinear analysis and adaptive control of meshes.

    PubMed

    Spilker, R L; de Almeida, E S; Donzelli, P S

    1992-01-01

    This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. 
The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.

  8. Towards the low-dose characterization of beam sensitive nanostructures via implementation of sparse image acquisition in scanning transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan

    2017-04-01

    Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe with a high current density causes electron-beam damage, including radiolysis and knock-on damage, when the focused probe is exposed to electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have been opened up. In this paper, we report our recent approach to implement a sparse acquisition in STEM mode executed by a random sparse-scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5% of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen are scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for down to 5% sampling show consistency with the full STEM image acquired by the conventional scanning method. 
    Although practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse-acquisition STEM for the low-dose imaging conditions required to investigate beam-sensitive materials, the results obtained in our experiments demonstrate that sparse-acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times, expanding the frontiers of our characterization capabilities for the investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
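
    The acquisition scheme is easy to emulate. A toy sketch, not the paper's MBIR: randomly visit a small fraction of probe positions, then fill the unmeasured pixels by iterative neighbour averaging (a crude inpainting stand-in, with periodic boundaries assumed) while pinning the measured values.

```python
import numpy as np

def sparse_scan(image, fraction=0.05, seed=None):
    # Keep a random 'fraction' of probe positions (pixels); zero the rest.
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape) < fraction
    return np.where(mask, image, 0.0), mask

def inpaint(sparse, mask, n_iter=2000):
    # Crude diffusion inpainting: unmeasured pixels relax to the mean of
    # their 4-neighbours (periodic wrap); measured pixels stay fixed.
    img = sparse.copy()
    for _ in range(n_iter):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img = np.where(mask, sparse, avg)
    return img
```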

  9. A geometric viewpoint on generalized hydrodynamics

    NASA Astrophysics Data System (ADS)

    Doyon, Benjamin; Spohn, Herbert; Yoshimura, Takato

    2018-01-01

    Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective ("dressed") velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.
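
    The last remark can be illustrated with the simplest case. A sketch, assuming a discretized linear integral equation u = f + K u with a contractive kernel (the GHD equations are of this fixed-point form, though their kernel is state-dependent): plain Picard iteration converges to the solution.

```python
import numpy as np

def solve_by_iteration(f, K, tol=1e-12, max_iter=10000):
    # Picard iteration u_{n+1} = f + K @ u_n; converges when ||K|| < 1.
    u = f.copy()
    for _ in range(max_iter):
        u_next = f + K @ u
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u
```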

  10. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  11. Rocketdyne LOX bearing tester program

    NASA Technical Reports Server (NTRS)

    Keba, J. E.; Beatty, R. F.

    1988-01-01

    The cause, or causes, of the Space Shuttle Main Engine ball wear were unknown; however, several mechanisms were suspected. Two testers were designed and built for operation in liquid oxygen to empirically gain insight into the problems and iterate solutions in a timely and cost-efficient manner independent of engine testing. Schedules and test plans were developed that defined a test matrix consisting of parametric variations of loading, cooling or vapor margin, cage lubrication, material, and geometry studies. Initial test results indicated that the low pressure pump thrust bearing surface distress is a function of high axial load. Initial high pressure turbopump bearing tests produced the wear phenomenon observed in the turbopump and identified an inadequate vapor margin problem and a coolant flowrate sensitivity issue. These tests provided calibration data for analytical model predictions, giving high confidence in the positive impact of future turbopump design modifications for flight. Various modifications will be evaluated in these testers, since similar turbopump conditions can be produced and the benefit of each modification will be quantified in measured wear life comparisons.

  12. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE PAGES

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong; ...

    2017-07-11

    Transient thermo-mechanical simulation of a stainless plate laser welding process was performed with a highly efficient and accurate approach: a hybrid iterative substructure and adaptive mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on the residual stress and welding distortion of stainless thin sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effect of strain hardening, annealing, and melting on residual stress prediction was clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among the residual stresses obtained by the numerical method, XRD, and the contour method was achieved. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  14. Design and fabrication of a boron reinforced intertank skirt

    NASA Technical Reports Server (NTRS)

    Henshaw, J.; Roy, P. A.; Pylypetz, P.

    1974-01-01

    Analytical and experimental studies were performed to evaluate the structural efficiency of a boron reinforced shell, where the medium of reinforcement consists of hollow aluminum extrusions infiltrated with boron epoxy. Studies were completed for the design of a one-half scale minimum weight shell using boron reinforced stringers and boron reinforced rings. Parametric and iterative studies were completed for the design of minimum weight stringers, rings, shells without rings and shells with rings. Computer studies were completed for the final evaluation of a minimum weight shell using highly buckled minimum gage skin. The detail design is described of a practical minimum weight test shell which demonstrates a weight savings of 30% as compared to an all aluminum longitudinal stiffened shell. Sub-element tests were conducted on representative segments of the compression surface at maximum stress and also on segments of the load transfer joint. A 10 foot long, 77 inch diameter shell was fabricated from the design and delivered for further testing.

  15. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma, relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate the desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.
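The iterated linearized measurement update at the heart of such a filter can be sketched as follows. This is a textbook Gauss-Newton-style iterated update, not the Spitzer IPF implementation (which is high-order, square-root factorized, and carries 37 states); all names are illustrative:

```python
import numpy as np

def iterated_kf_update(x, P, z, h, H_jac, R, n_iter=5):
    """Iterated linearized Kalman measurement update: relinearize the
    measurement model h about the refreshed estimate on every pass
    (Gauss-Newton form). Sketch only; a square-root implementation would
    propagate Cholesky factors of P instead of P itself."""
    x_i = x.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)                        # Jacobian at current iterate
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # gain
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_upd = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_upd
```

For a linear measurement model the loop reproduces the standard Kalman update in a single pass; the iteration only matters when h is nonlinear.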

  16. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-09

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that, together with an additional Stern layer correction, the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.

  17. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Even with effective algorithms based on dynamic programming, iterative calculation of the likelihood, as required in model selection, remains time-consuming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map, referred to as the vicarious map, under which the estimated parameters have an asymptotically equivalent convergence point. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
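For hidden Markov models, the dynamic-programming likelihood computation referred to here is the forward algorithm. A minimal scaled implementation (illustrative, not the paper's code):

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) for an HMM with initial
    distribution pi, transition matrix A, and emission matrix
    B[state, symbol]. Dynamic programming gives O(T * n_states^2) cost,
    versus exponential cost for summing over all state paths."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()      # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha = alpha / s
    return log_lik
```

Restricting the observation length, as in the feature space considered above, directly caps the number of loop iterations T.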

  18. A computational study of thrust augmenting ejectors based on a viscous-inviscid approach

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.; Tavella, Domingo A.; Roberts, Leonard

    1987-01-01

    A viscous-inviscid interaction technique is advocated as both an efficient and accurate means of predicting the performance of two-dimensional thrust augmenting ejectors. The flow field is subdivided into a viscous region that contains the turbulent jet and an inviscid region that contains the ambient fluid drawn into the device. The inviscid region is computed with a higher-order panel method, while an integral method is used for the description of the viscous part. The strong viscous-inviscid interaction present within the ejector is simulated in an iterative process where the two regions influence each other en route to a converged solution. The model is applied to a variety of parametric and optimization studies involving ejectors having either one or two primary jets. The effects of nozzle placement, inlet and diffuser shape, free stream speed, and ejector length are investigated. The inlet shape for single jet ejectors is optimized for various free stream speeds and Reynolds numbers. Optimal nozzle tilt and location are identified for various dual-ejector configurations.
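The iterative viscous-inviscid coupling follows the generic pattern of an under-relaxed fixed-point loop between two sub-solvers. A schematic sketch on a scalar interface quantity (the actual solvers here are a higher-order panel method and an integral method; the function names and relaxation factor are illustrative):

```python
def coupled_solve(update_viscous, update_inviscid, x0,
                  tol=1e-10, relax=0.5, max_iter=200):
    """Under-relaxed fixed-point coupling of two sub-solvers: each pass feeds
    the viscous solution to the inviscid solver and blends the result with
    the previous iterate until successive iterates agree to tol."""
    x = x0
    for i in range(max_iter):
        x_new = relax * update_inviscid(update_viscous(x)) + (1.0 - relax) * x
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    raise RuntimeError("viscous-inviscid coupling did not converge")
```

Under-relaxation trades speed for robustness when the interaction between the two regions is strong, as it is inside the ejector.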

  19. The ITPA disruption database

    DOE PAGES

    Eidietis, N. W.; Gerhardt, S. P.; Granetz, R. S.; ...

    2015-05-22

    A multi-device database of disruption characteristics has been developed under the auspices of the International Tokamak Physics Activity magnetohydrodynamics topical group. The purpose of this ITPA Disruption Database (IDDB) is to find the commonalities between the disruption and disruption-mitigation characteristics of a wide variety of tokamaks in order to elucidate the physics underlying tokamak disruptions and to extrapolate toward much larger devices, such as ITER and future burning plasma devices. In contrast to previous, smaller disruption data collation efforts, the IDDB aims to provide significant context for each shot provided, allowing exploration of a wide array of relationships between pre-disruption and disruption parameters. The IDDB presently includes contributions from nine tokamaks, including both conventional aspect ratio and spherical tokamaks. An initial parametric analysis of the available data is presented, including current quench rates, halo current fraction and peaking, and the effectiveness of massive impurity injection. The IDDB is publicly available, with instructions for access provided herein.

  20. Aerothermodynamic testing requirements for future space transportation systems

    NASA Technical Reports Server (NTRS)

    Paulson, John W., Jr.; Miller, Charles G., III

    1995-01-01

    Aerothermodynamics, encompassing aerodynamics, aeroheating, and fluid dynamic and physical processes, is the genesis for the design and development of advanced space transportation vehicles. It provides crucial information to other disciplines involved in the development process such as structures, materials, propulsion, and avionics. Sources of aerothermodynamic information include ground-based facilities, computational fluid dynamic (CFD) and engineering computer codes, and flight experiments. Utilization of this triad is required to provide the optimum requirements while reducing undue design conservatism, risk, and cost. This paper discusses the role of ground-based facilities in the design of future space transportation system concepts. Testing methodology is addressed, including the iterative approach often required for the assessment and optimization of configurations from an aerothermodynamic perspective. The influence of vehicle shape and the transition from parametric studies for optimization to benchmark studies for final design and establishment of the flight data book is discussed. Future aerothermodynamic testing requirements including the need for new facilities are also presented.

  1. Crack propagation in aluminum sheets reinforced with boron-epoxy

    NASA Technical Reports Server (NTRS)

    Roderick, G. L.

    1979-01-01

    An analysis was developed to predict both the crack growth and debond growth in a reinforced system. The analysis was based on the use of complex variable Green's functions for cracked, isotropic sheets and uncracked, orthotropic sheets to calculate inplane and interlaminar stresses, stress intensities, and strain-energy-release rates. An iterative solution was developed that used the stress intensities and strain-energy-release rates to predict crack and debond growths, respectively, on a cycle-by-cycle basis. A parametric study was made of the effects of boron-epoxy composite reinforcement on crack propagation in aluminum sheets. Results show that the size of the debond area has a significant effect on the crack propagation in the aluminum. For small debond areas, the crack propagation rate is reduced significantly, but these small debonds have a strong tendency to enlarge. Debond growth is most likely to occur in reinforced systems that have a cracked metal sheet reinforced with a relatively thin composite sheet.
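The cycle-by-cycle growth prediction described above can be illustrated with a generic Paris-type law driven by the stress intensity range. This is a hedged sketch only: the constants, the geometry factor Y, and the specific growth laws below are illustrative, not the relations used in the paper:

```python
import math

def grow_crack(a0, d_sigma, C, m, n_cycles, Y=1.12):
    """Cycle-by-cycle crack growth under a Paris-type law
    da/dN = C * (dK)^m with dK = Y * d_sigma * sqrt(pi * a).
    Returns the crack-length history; all constants are illustrative."""
    a = a0
    history = [a]
    for _ in range(n_cycles):
        dK = Y * d_sigma * math.sqrt(math.pi * a)   # stress intensity range
        a += C * dK ** m                            # growth this cycle
        history.append(a)
    return history
```

Because dK grows with crack length, the per-cycle increment accelerates as the crack extends, which is why debond growth (which relieves the stress intensity in the metal) can slow crack propagation.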

  2. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We will show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. Novel approaches are developed to accelerate the inversion search, using new statistical implementations and reducing model complexity through foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and the spectral reflection characteristics are in focus for various reasons, one of which is the prospect that their interpretation will provide for real time active vision for automated EDL.

  3. Quasi-linear modeling of lower hybrid current drive in ITER and DEMO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardinali, A., E-mail: alessandro.cardinali@enea.it; Cesario, R.; Panaccione, L.

    2015-12-10

    First-pass absorption of the Lower Hybrid waves in thermonuclear devices like ITER and DEMO is modeled by coupling the ray tracing equations with the quasi-linear evolution of the electron distribution function in 2D velocity space. As usually assumed, Lower Hybrid Current Drive is not effective in the plasma of a tokamak fusion reactor, owing to the accessibility condition which, depending on the density, restricts the parallel wavenumber to values greater than n∥crit, and, at the same time, to the high electron temperature that enhances the wave absorption and thus restricts the RF power deposition to the very periphery of the plasma column (near the separatrix). In this work, by extensively using the "ray*" code, a parametric study of the propagation and absorption of the LH wave as a function of the coupled wave spectrum (such as its width and peak value) has been performed very accurately. Such a careful investigation aims at controlling the power deposition layer, possibly within the outer half radius of the plasma, thus providing a valuable aid to the solution of how to control the plasma current profile in a toroidal magnetic configuration and how to help the suppression of MHD modes that can develop in the outer part of the plasma. This analysis is useful not only for exploring the possibility of profile control in a pulsed-operation reactor, as well as tearing mode stabilization, but also for reconsidering the feasibility of a steady-state regime for DEMO.

  4. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template; these measures have been shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by applying the elliptical Hough transform to multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects, each with two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.
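The 3D Dice coefficient used for evaluation measures the volumetric overlap between two binary segmentation masks. A direct NumPy sketch of the standard definition:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks of the
    same shape (2D or 3D). Assumes at least one mask is non-empty."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

A value of 1.0 means perfect agreement with the reference segmentation and 0.0 means no overlap at all.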

  5. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. Under the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the intercept under the non-ideal linearity condition. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
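A generic IRLS line fit can be sketched as follows. This uses Huber-type weights with a MAD scale estimate, which is one common IRLS variant; the paper's exact weighting scheme is not specified here, so treat this as an illustrative sketch of the technique rather than the authors' implementation:

```python
import numpy as np

def irls_line_fit(x, y, n_iter=50, delta=1.0):
    """Iteratively re-weighted least squares for the line y = a*x + b.
    Each pass solves a weighted least-squares problem, then refreshes the
    weights from the residuals (Huber weights, MAD scale), so outliers are
    progressively down-weighted."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (delta * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
    return coef  # array([slope, intercept])
```

With a single gross outlier in an otherwise linear calibration set, the fit stays close to the true line, whereas plain OLS would be pulled far off.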

  6. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large number of surface-related multiples present in marine data can seriously degrade data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are data-driven; however, the elimination effect is often unsatisfactory owing to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficients; second, we apply the cubic B-spline function to these basis points to reconstruct the matching array; third, we build the constraint-solving equation from the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate, realizing a fast sparse-constrained solution of the multiple matching algorithm. Moreover, the soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
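Two ingredients of the scheme above are easy to state concretely: the cubic B-spline kernel used to interpolate a dense matching array from a few basis points, and the soft-threshold operator used for the L1 sparsity constraint. A hedged sketch of both (the curvelet transform and the BFGS loop are omitted):

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel B3(t), supported on |t| < 2; used to spread a
    small set of basis-point coefficients into a smooth dense array."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    near = t < 1.0
    far = (t >= 1.0) & (t < 2.0)
    out[near] = 2.0 / 3.0 - t[near] ** 2 + 0.5 * t[near] ** 3
    out[far] = (2.0 - t[far]) ** 3 / 6.0
    return out

def soft_threshold(c, tau):
    """Soft-thresholding: the proximal operator of the L1 norm, shrinking
    coefficients toward zero by tau and zeroing the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
```

Parameterizing the matching coefficients by a few B-spline basis points is what shrinks the unknown count, and soft thresholding is the standard way to enforce an L1 constraint inside an iterative solver.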

  7. Electronic cleansing for CT colonography using spectral-driven iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Nasirudin, Radin A.; Näppi, Janne J.; Hironaka, Toru; Tachibana, Rie; Yoshida, Hiroyuki

    2017-03-01

    Dual-energy computed tomography is used increasingly in CT colonography (CTC). The combination of computer-aided detection (CADe) and dual-energy CTC (DE-CTC) has high clinical value, because it can detect clinically significant colonic lesions automatically at higher accuracy than does conventional single-energy CTC. While CADe has demonstrated its ability to detect small polyps, its performance is highly dependent on several factors, including the quality of CTC images and electronic cleansing (EC) of the images. The presence of artifacts such as beam hardening and image noise in ultra-low-dose CTC can produce incorrectly cleansed colon images that severely degrade the detection performance of CTC for small polyps. Also, CADe methods are very dependent on the quality of input images and the information about different tissues in the colon. In this work, we developed a novel method to calculate EC images using spectral information from DE-CTC data. First, the ultra-low dose dual-energy projection data obtained from a CT scanner are decomposed into two materials, soft tissue and the orally administered fecal-tagging contrast agent, to detect the location and intensity of the contrast agent. Next, the images are iteratively reconstructed while gradually removing the presence of tagged materials from the images. Our preliminary qualitative results show that the method can cleanse the contrast agent and tagged materials correctly from DE-CTC images without affecting the appearance of surrounding tissue.
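The two-material decomposition step can be sketched per pixel as a 2x2 linear solve, under the simplifying (illustrative) assumption that attenuation at the two energies is a linear mix of soft tissue and tagging agent with known coefficients; the actual method works on projection data with an iterative reconstruction, which is omitted here:

```python
import numpy as np

def decompose_two_materials(low_img, high_img, mu):
    """Per-pixel two-material decomposition: solves
    [att_low, att_high]^T = mu @ [t_soft, t_contrast]^T for every pixel,
    where mu is the 2x2 matrix of assumed attenuation coefficients
    (rows: energies, columns: materials). Returns the two material maps."""
    mu_inv = np.linalg.inv(mu)
    att = np.stack([low_img.ravel(), high_img.ravel()])   # shape (2, n_pix)
    thick = mu_inv @ att
    return (thick[0].reshape(low_img.shape),
            thick[1].reshape(low_img.shape))
```

The contrast-agent map obtained this way indicates where tagged material should be subtracted during cleansing.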

  8. Monitoring of metallic contaminants in energy drinks using ICP-MS.

    PubMed

    Kilic, Serpil; Cengiz, Mehmet Fatih; Kilic, Murat

    2018-03-09

    In this study, an improved method was validated for the determination of metallic contaminants (arsenic (As), chromium (Cr), cadmium (Cd), lead (Pb), iron (Fe), nickel (Ni), copper (Cu), manganese (Mn), and antimony (Sb)) in energy drinks using inductively coupled plasma mass spectrometry (ICP-MS). The validation procedure covered linearity, repeatability, recovery, and limits of detection and quantification. In addition, to verify the trueness of the method, the laboratory participated in an interlaboratory proficiency test for heavy metals in soft drinks organized by the LGC (Laboratory of the Government Chemist) Standard. The validated method was then used to monitor metallic contaminants in commercial energy drink samples. Concentrations of As, Cr, Cd, Pb, Fe, Ni, Cu, Mn, and Sb in the samples were found in the ranges of 0.76-6.73, 13.25-100.96, 0.16-2.11, 9.33-28.96, 334.77-937.12, 35.98-303.97, 23.67-60.48, 5.45-489.93, and 0.01-0.42 μg L⁻¹, respectively. The results were compared with the provisional guideline or parametric values of the elements for drinking waters set by the WHO (World Health Organization) and EC (European Commission). As, Cd, Cu, and Sb did not exceed the WHO and EC provisional guideline or parametric values; however, the other elements (Cr, Pb, Fe, Ni, and Mn) were found to exceed their relevant limits at various levels.

  9. Entropic One-Class Classifiers.

    PubMed

    Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold

    2015-12-01

    The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.
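The dissimilarity-space embedding step described above can be sketched directly: each sample is represented by its vector of distances to a set of prototype objects. The sketch below uses a Minkowski-p distance as the parametric measure; the paper's measure is more general and its parameters are optimized globally, which is omitted here:

```python
import numpy as np

def dissimilarity_embedding(X, prototypes, p=2.0):
    """Embed samples into the dissimilarity space: row i of the result is
    the vector of Minkowski-p distances from sample i to every prototype.
    The exponent p is the (here, single) parameter of the measure."""
    diff = X[:, None, :] - prototypes[None, :, :]   # (n, m, d)
    return np.sum(np.abs(diff) ** p, axis=2) ** (1.0 / p)
```

Because only pairwise dissimilarities are needed, the same pipeline applies to feature vectors, graphs, or any data for which a dissimilarity measure can be defined.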

  10. Statistical plant set estimation using Schroeder-phased multisinusoidal input design

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    A frequency domain method is developed for plant set estimation. The estimation of a plant 'set', rather than a point estimate, is required to support many methods of modern robust control design. The approach here is based on a Schroeder-phased multisinusoid input design, which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error and to many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace the 'hard' bounds presently used in many robust control analysis and synthesis methods.
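A Schroeder-phased multisine can be generated from the standard phase formula phi_k = -pi*k*(k-1)/N, which keeps the peak amplitude (crest factor) low for a flat amplitude spectrum. A minimal sketch (tone count, sample count, and normalization are illustrative choices, not the paper's experiment design):

```python
import numpy as np

def schroeder_multisine(n_tones, n_samples):
    """One period of an equal-amplitude multisine at harmonics 1..n_tones
    with Schroeder phases phi_k = -pi*k*(k-1)/n_tones, normalized to unit
    total tone power. Energy sits only at the chosen frequency bins."""
    t = np.arange(n_samples) / n_samples
    k = np.arange(1, n_tones + 1)
    phases = -np.pi * k * (k - 1) / n_tones
    u = np.sum(np.cos(2.0 * np.pi * k[:, None] * t[None, :] + phases[:, None]),
               axis=0)
    return u / np.sqrt(n_tones)
```

Compared with a zero-phase multisine of the same spectrum (all tones peaking together at t = 0), the Schroeder phasing spreads the energy in time and markedly reduces the crest factor, which matters when actuator amplitude is limited.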

  11. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the human appearance and mobility of a real hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167 (excluding the actuators), significantly lower than that of other robotic hands, which have more complex assembly processes.

  12. New Gogny interaction suitable for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Gonzalez-Boquera, C.; Centelles, M.; Viñas, X.; Robledo, L. M.

    2018-04-01

    The D1 family of parametrizations of the Gogny interaction commonly suffers from a rather soft neutron matter equation of state that leads to maximal masses of neutron stars well below the observational value of two solar masses. We propose a reparametrization scheme that preserves the good properties of the Gogny force but allows one to tune the density dependence of the symmetry energy, which, in turn, modifies the predictions for the maximum stellar mass. The scheme works well for D1M, and leads to a new parameter set, dubbed D1M*. In the neutron-star domain, D1M* predicts a maximal mass of two solar masses and global properties of the star in harmony with those obtained with the SLy4 Skyrme interaction. By means of a set of selected calculations in finite nuclei, we check that D1M* performs comparably well to D1M in several aspects of nuclear structure in nuclei.

  13. Non-native three-dimensional block copolymer morphologies

    DOE PAGES

    Rahman, Atikur; Majewski, Pawel W.; Doerk, Gregory; ...

    2016-12-22

    Self-assembly is a powerful paradigm, wherein molecules spontaneously form ordered phases exhibiting well-defined nanoscale periodicity and shapes. However, the inherent energy-minimization aspect of self-assembly yields a very limited set of morphologies, such as lamellae or hexagonally packed cylinders. Here, we show how soft self-assembling materials—block copolymer thin films—can be manipulated to form a diverse library of previously unreported morphologies. In this iterative assembly process, each polymer layer acts as both a structural component of the final morphology and a template for directing the order of subsequent layers. Specifically, block copolymer films are immobilized on surfaces, and template successive layers through subtle surface topography. As a result, this strategy generates an enormous variety of three-dimensional morphologies that are absent in the native block copolymer phase diagram.

  14. Practical Use of Operation Data in the Process Industry

    NASA Astrophysics Data System (ADS)

    Kano, Manabu

This paper aims to reveal real problems in the process industry and to introduce recent developments that address them from the viewpoint of effective use of operation data. Two topics are discussed: virtual sensors and process control. First, to clarify the present state and its problems, part of our recent questionnaire survey on process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. New techniques are then explained. The first is correlation-based just-in-time modeling (CoJIT), which can achieve higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which realizes data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
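Correlation-based just-in-time modeling builds a local model from the stored samples most similar to the current query. The sketch below uses a correlation score and a local least-squares fit as illustrative assumptions; it is not Kano's exact formulation:

```python
import numpy as np

def cojit_predict(X_hist, y_hist, x_query, k=30):
    """Correlation-based just-in-time prediction (illustrative sketch).

    Ranks stored samples by correlation similarity to the query,
    then fits a local least-squares model on the k best matches.
    """
    # Correlation of each historical sample with the query vector
    Xc = X_hist - X_hist.mean(axis=1, keepdims=True)
    qc = x_query - x_query.mean()
    sim = (Xc @ qc) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(qc) + 1e-12)
    idx = np.argsort(sim)[-k:]            # k most similar samples
    A = np.c_[X_hist[idx], np.ones(k)]    # local linear model with bias term
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    return np.r_[x_query, 1.0] @ coef
```

Because the model is refit per query, "maintenance" reduces to appending new samples to the database, which is part of the appeal of just-in-time schemes.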

  15. Non-native three-dimensional block copolymer morphologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Atikur; Majewski, Pawel W.; Doerk, Gregory

Self-assembly is a powerful paradigm, wherein molecules spontaneously form ordered phases exhibiting well-defined nanoscale periodicity and shapes. However, the inherent energy-minimization aspect of self-assembly yields a very limited set of morphologies, such as lamellae or hexagonally packed cylinders. Here, we show how soft self-assembling materials—block copolymer thin films—can be manipulated to form a diverse library of previously unreported morphologies. In this iterative assembly process, each polymer layer acts as both a structural component of the final morphology and a template for directing the order of subsequent layers. Specifically, block copolymer films are immobilized on surfaces, and template successive layers through subtle surface topography. As a result, this strategy generates an enormous variety of three-dimensional morphologies that are absent in the native block copolymer phase diagram.

  16. Modal analysis and dynamic stresses for acoustically excited Shuttle insulation tiles

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Ogilvie, P. I.

    1976-01-01

    The thermal protection system of the Space Shuttle consists of thousands of separate insulation tiles, of varying thicknesses, bonded to the orbiter's surface through a soft strain-isolation pad which is bonded, in turn, to the vehicle's stiffened metallic skin. A modal procedure for obtaining the acoustically induced RMS stress in these comparatively thick tiles is described. The modes employed are generated by a previously developed iterative procedure which converges rapidly for the combined system of tiles and primary structure considered. Each tile is idealized by several hundred three-dimensional finite elements and all tiles on a given panel interact dynamically. Acoustic response results from the present analyses are presented. Comparisons with other analytical results and measured modal data for a typical Shuttle panel, both with and without tiles, are made, and the agreement is good.

  17. Modeling CO2 degassing and pH in a stream-aquifer system

    USGS Publications Warehouse

    Choi, J.; Hulseapple, S.M.; Conklin, M.H.; Harvey, J.W.

    1998-01-01

Pinal Creek, Arizona, receives an inflow of ground water with high dissolved inorganic carbon (57-75 mg/l) and low pH (5.8-6.3). In-stream pH increases from approximately 6.0 to 7.8 over the 3 km downstream of the point of ground-water inflow. We hypothesized that CO2 gas exchange was the most important factor causing the pH increase in this stream-aquifer system. An existing transport model for coupled ground water-surface water systems (OTIS) was modified to include carbonate equilibria and CO2 degassing, and was used to simulate alkalinity, total dissolved inorganic carbon (C(T)), and pH in Pinal Creek. Because of the non-linear relation between pH and C(T), the modified transport model solved the non-linearity by numerical iteration. The transport model parameters were determined by the injection of two tracers, bromide and propane. The resulting simulations of alkalinity, C(T) and pH reproduced, without fitting, the overall trends in downstream concentrations. A multi-parametric sensitivity analysis (MPSA) was used to identify the relative sensitivities of the predictions to six of the physical and chemical parameters used in the transport model. MPSA results implied that C(T) and pH in stream water were controlled by the mixing of ground water with stream water and by CO2 degassing. The relative importance of these two processes varied spatially depending on hydrologic conditions, such as stream flow velocity and whether a reach gained or lost stream water through interaction with the ground water. The coupled transport model with CO2 degassing and the generalized sensitivity analysis presented in this study can be applied to evaluate carbon transport and pH in other coupled stream-ground water systems.
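The non-linear pH/C(T) relation can be illustrated with a minimal solver that recovers pH from total dissolved inorganic carbon and carbonate alkalinity. The equilibrium constants below are generic 25 °C freshwater values, and bisection stands in for whatever iteration scheme the modified OTIS model actually uses:

```python
# Carbonate equilibrium constants at 25 degC (freshwater, illustrative values)
K1, K2, KW = 10 ** -6.35, 10 ** -10.33, 10 ** -14.0

def alk(h, ct):
    """Carbonate alkalinity (eq/L) for [H+] = h (mol/L) and total DIC = ct (mol/L)."""
    d = h * h + K1 * h + K1 * K2
    hco3 = ct * K1 * h / d          # bicarbonate fraction of DIC
    co3 = ct * K1 * K2 / d          # carbonate fraction of DIC
    return hco3 + 2 * co3 + KW / h - h

def ph_from_ct_alk(ct, alk_target):
    """Solve the nonlinear pH/CT relation by bisection on pH in [2, 12]."""
    lo, hi = 2.0, 12.0              # alk() increases monotonically with pH
    while hi - lo > 1e-10:
        mid = 0.5 * (lo + hi)
        if alk(10 ** -mid, ct) < alk_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the transport context this solver would be called at each grid cell after advecting C(T) and alkalinity, since both are conservative while pH is not.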

  18. Hybrid integral-differential simulator of EM force interactions/scenario-assessment tool with pre-computed influence matrix in applications to ITER

    NASA Astrophysics Data System (ADS)

    Rozov, V.; Alekseev, A.

    2015-08-01

The necessity of addressing a wide spectrum of engineering problems in ITER created the need for efficient tools for modeling the magnetic environment and force interactions between the main components of the magnet system. The assessment of the operating window for the machine, determined by the electro-magnetic (EM) forces, and the check of feasibility of particular scenarios play an important role in ensuring the safety of exploitation. Such analysis-powered prevention of damage forms an element of the Machine Operations and Investment Protection strategy. The corresponding analysis is a necessary step in preparing for commissioning, which finalizes the construction phase. It shall be supported by the development of efficient and robust simulators and by multi-physics/multi-system integration of models. The developed numerical model of interactions in the ITER magnetic system, based on the use of pre-computed influence matrices, facilitated immediate and complete assessment and systematic specification of EM loads on magnets in all foreseen operating regimes, their maximum values, envelopes and the most critical scenarios. The common principles of interaction in typical bilateral configurations have been generalized to asymmetric conditions, inspired by the plasma and by the hardware, including asymmetric plasma events and magnet system fault cases. The specification of loads is supported by the technology of functional approximation of nodal and distributed data by continuous patterns/analytical interpolants. The global model of interactions, together with the mesh-independent analytical format of output, provides a source of self-consistent and transferable data on the spatial distribution of the system of forces for assessments of the structural performance of components, assemblies and supporting structures.
The numerical model used is fully parametrized, which makes it very suitable for multi-variant and sensitivity studies (positioning, off-normal events, asymmetry, etc). The obtained results and matrices form a basis for a relatively simple and robust force processor as a specialized module of a global simulator for diagnostic, operational instrumentation, monitoring and control, as well as a scenario assessment tool. This paper gives an overview of the model, applied technique, assessed problems and obtained qualitative and quantitative results.
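The influence-matrix idea reduces each scenario assessment to matrix products with pre-computed geometric factors. The following is a minimal one-component sketch under assumed conventions: a hypothetical geometry matrix G with F_i = I_i * (G I)_i, taken antisymmetric so that action equals reaction; the real ITER force processor is far more detailed:

```python
import numpy as np

def coil_forces(G, currents):
    """One force component on each coil from a pre-computed influence matrix.

    G[i, j] is an assumed geometry-only factor giving the force on coil i
    per unit product of currents I_i * I_j, so F_i = I_i * sum_j G[i, j] * I_j.
    G is computed once; each scenario then costs only a matrix product.
    """
    I = np.asarray(currents, dtype=float)
    return I * (G @ I)
```

With G fixed, sweeping currents over a whole scenario database is just a batch of matrix-vector products, which is what makes the approach suitable for multi-variant and sensitivity studies.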

  19. Global Adjoint Tomography: Next-Generation Models

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Orsvuran, Ridvan; Peter, Daniel; Ruan, Youyi; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2017-04-01

The first-generation global adjoint tomography model GLAD-M15 (Bozdag et al. 2016) is the result of 15 conjugate-gradient iterations based on GPU-accelerated spectral-element simulations of 3D wave propagation and Fréchet kernels. For simplicity, GLAD-M15 was constructed as an elastic model with transverse isotropy confined to the upper mantle. However, Earth's mantle and crust show significant evidence of anisotropy as a result of their composition and deformation. There may be different sources of seismic anisotropy affecting both body and surface waves. As a first attempt, we tackle surface-wave anisotropy and proceed with iterations using the same 253-earthquake data set used in GLAD-M15, with an emphasis on the upper mantle. Furthermore, we explore new misfits, such as double-difference measurements (Yuan et al. 2016), to better deal with the possible artifacts of the uneven global distribution of seismic stations and to minimize source uncertainties in structural inversions. We will present our observations with the initial results of azimuthally anisotropic inversions and also discuss the next-generation global models with various parametrizations. Meanwhile, our goal is to use all available seismic data in imaging. This, however, requires a solid framework to perform iterative adjoint tomography workflows with big data on supercomputers. We will discuss developments in the adjoint tomography workflow, from the need to define new seismic and computational data formats (e.g., ASDF by Krischer et al. 2016, ADIOS by Liu et al. 2011) to the development of new pre- and post-processing tools, together with experiments with workflow management tools such as Pegasus (Deelman et al. 2015). All our simulations are performed on Oak Ridge National Laboratory's Cray XK7 "Titan" system.
Our ultimate aim is to be ready to harness ORNL's next-generation supercomputer "Summit", an IBM system with Power-9 CPUs and NVIDIA Volta GPU accelerators expected in 2018, which will enable us to reduce the shortest period in our global simulations from 17 s to 9 s; exascale systems will reduce this further to just a few seconds.

  20. Fast elastic registration of soft tissues under large deformations.

    PubMed

    Peterlík, Igor; Courtecuisse, Hadrien; Rohling, Robert; Abolmaesumi, Purang; Nguan, Christopher; Cotin, Stéphane; Salcudean, Septimiu

    2018-04-01

Fast and accurate fusion of intra-operative images with pre-operative data is a key component of computer-aided interventions, which aim at improving the outcome of the intervention while reducing the patient's discomfort. In this paper, we focus on the problem of intra-operative navigation during abdominal surgery, which requires an accurate registration of tissues undergoing large deformations. Such a scenario occurs in the case of partial hepatectomy: to facilitate access to the pathology, e.g. a tumor located in the posterior part of the right lobe, the surgery is performed with the patient in a lateral position. Due to the change in the patient's position, the resection plan based on the pre-operative CT scan acquired in the supine position must be updated to account for the deformations. We suppose that an imaging modality, such as cone-beam CT, provides information about the intra-operative shape of an organ; however, due to the reduced radiation dose and contrast, the actual locations of the internal structures necessary to update the planning are not available. To this end, we propose a method allowing for fast registration of the pre-operative data, represented by a detailed 3D model of the liver and its internal structures, with the actual configuration given by the organ surface extracted from the intra-operative image. The algorithm behind the method combines the iterative closest point technique with a biomechanical model based on a co-rotational formulation of linear elasticity, which accounts for large deformations of the tissue. The performance, robustness and accuracy of the method are quantitatively assessed on a control semi-synthetic dataset with known ground truth and on a real dataset composed of nine pairs of abdominal CT scans acquired in supine and flank positions.
It is shown that the proposed surface-matching method is capable of reducing the target registration error, evaluated on the internal structures of the organ, from more than 40 mm to less than 10 mm. Moreover, the control data are used to demonstrate the compatibility of the method with an intra-operative clinical scenario, while the real datasets are utilized to study the impact of parametrization on the accuracy of the method. The method is also compared to a state-of-the-art intensity-based registration technique in terms of accuracy and performance. Copyright © 2017 Elsevier B.V. All rights reserved.
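The iterative closest point component of such a method alternates correspondence search with a least-squares rigid alignment. The sketch below shows only this rigid ICP core (Kabsch alignment, brute-force matching), not the co-rotational biomechanical model the paper couples it with:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    """Rigid iterative closest point: alternate matching and alignment."""
    src = source.copy()
    for _ in range(iters):
        # Closest-point correspondences (brute force, for clarity only)
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(1)]
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
    return src
```

In the deformable setting, the rigid update above would be replaced by a solve of the elastic model driven by the surface correspondences.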

  1. A clinically applicable non-invasive method to quantitatively assess the visco-hyperelastic properties of human heel pad, implications for assessing the risk of mechanical trauma.

    PubMed

    Behforootan, Sara; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan; Naemi, Roozbeh

    2017-04-01

Pathological conditions such as diabetic foot and plantar heel pain are associated with changes in the mechanical properties of plantar soft tissue. However, the causes and implications of these changes are not yet fully understood, mainly because accurate assessment of the mechanical properties of plantar soft tissue in the clinic remains extremely challenging. The aim of this study was to develop a clinically viable, non-invasive method of assessing the mechanical properties of the heel pad. Furthermore, the effect of the heel pad's non-linear mechanical behaviour on its ability to uniformly distribute foot-ground contact loads is also investigated in light of the effect of overloading. An automated custom device for ultrasound indentation was developed, along with custom algorithms for automated subject-specific modeling of the heel pad. Non-time-dependent and time-dependent material properties were inverse-engineered from the results of quasi-static indentation and stress-relaxation tests, respectively. The validity of the calculated coefficients was assessed for five healthy participants. The implications of altered mechanical properties for the heel pad's ability to uniformly distribute plantar loading were also investigated in a parametric analysis. The subject-specific heel pad models, with coefficients calculated from quasi-static indentation and stress relaxation, were able to accurately simulate dynamic indentation. The average error in the predicted forces at maximum deformation was only 6.6±4.0%. When the inverse-engineered coefficients were used to simulate the first instance of heel strike, the error in terms of peak plantar pressure was 27%. The parametric analysis indicated that the heel pad's ability to uniformly distribute plantar loads is influenced both by its overall deformability and by its stress-strain behaviour.
When overall deformability stays constant, changes in stress/strain behaviour leading to a more "linear" mechanical behaviour appear to improve the heel pad's ability to uniformly distribute plantar loading. The developed technique can accurately assess the visco-hyperelastic behaviour of heel pad. It was observed that specific change in stress-strain behaviour can enhance/weaken the heel pad's ability to uniformly distribute plantar loading that will increase/decrease the risk for overloading and trauma. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Dose reduction in abdominal computed tomography: intraindividual comparison of image quality of full-dose standard and half-dose iterative reconstructions with dual-source computed tomography.

    PubMed

    May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M

    2011-07-01

We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstruction in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref.mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as the reference; HD images applying FBP; and HD images applying IRIS. For the HD datasets, only data from one tube-detector system were used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality Criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.

  3. SU-F-207-02: Use of Postmortem Subjects for Subjective Image Quality Assessment in Abdominal CT Protocols with Iterative Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mench, A; Lipnharski, I; Carranza, C

Purpose: New radiation dose reduction technologies are emerging constantly in the medical imaging field. The latest of these technologies, iterative reconstruction (IR) in CT, presents the ability to reduce dose significantly and hence provides great opportunity for CT protocol optimization. However, without effective analysis of image quality, the reduction in radiation exposure becomes irrelevant. This work explores the use of postmortem subjects as an image quality assessment medium for protocol optimizations in abdominal CT. Methods: Three female postmortem subjects were scanned using the Abdomen-Pelvis (AP) protocol at reduced minimum tube current and target noise index (SD) settings of 12.5, 17.5, 20.0, and 25.0. Images were reconstructed using two strengths of iterative reconstruction. Radiologists and radiology residents from several subspecialties were asked to evaluate 8 AP image sets including the current facility default scan protocol and 7 scans with the parameters varied as listed above. Images were viewed in the soft tissue window and scored on a 3-point scale as acceptable, borderline acceptable, and unacceptable for diagnosis. The facility default AP scan was identified to the reviewer while the 7 remaining AP scans were randomized and de-identified of acquisition and reconstruction details. The observers were also asked to comment on the subjective image quality criteria they used for scoring images. This included visibility of specific anatomical structures and tissue textures. Results: Radiologists scored images as acceptable or borderline acceptable for target noise index settings of up to 20. Due to the postmortem subjects’ close representation of living human anatomy, readers were able to evaluate images as they would those of actual patients. Conclusion: Postmortem subjects have already been proven useful for direct CT organ dose measurements.
This work illustrates the validity of their use for the crucial evaluation of image quality during CT protocol optimization, especially when investigating the effects of new technologies.

  4. New design of cable-in-conduit conductor for application in future fusion reactors

    NASA Astrophysics Data System (ADS)

    Qin, Jinggang; Wu, Yu; Li, Jiangang; Liu, Fang; Dai, Chao; Shi, Yi; Liu, Huajun; Mao, Zhehua; Nijhuis, Arend; Zhou, Chao; Yagotintsev, Konstantin A.; Lubkemann, Ruben; Anvar, V. A.; Devred, Arnaud

    2017-11-01

The China Fusion Engineering Test Reactor (CFETR) is a new tokamak device whose magnet system includes toroidal field, central solenoid (CS) and poloidal field coils. The main goal is to build a fusion engineering tokamak reactor with about 1 GW fusion power and self-sufficiency by blanket. In order to reach this high performance, the magnetic field target is 15 T. However, the huge electromagnetic load caused by the high field and current poses a risk of conductor degradation under cycling. The conductor with a short-twist-pitch (STP) design has large stiffness, which enables a significant performance improvement under load and thermal cycling. However, the STP design has a notable disadvantage: it can easily cause severe strand indentation during cabling. The indentation can reduce the strand performance, especially under high load cycling. In order to overcome this disadvantage, a new design is proposed. The main characteristic of this new design is an updated layout in the triplet. The triplet is made of two Nb3Sn strands and one soft copper strand. The two Nb3Sn strands have a large twist pitch and are cabled first. The copper strand is then wound around the two superconducting strands (CWS) with a shorter twist pitch. The subsequent cable stage layouts and twist pitches are similar to those of the ITER CS conductor with the STP design. One short conductor sample with a scale similar to the ITER CS was manufactured and tested with the Twente Cable Press to investigate the mechanical properties, AC loss and internal inspection by destructive examination. The results are compared to the STP conductor (ITER CS and CFETR CSMC) tests. The results show that the new conductor design has similar stiffness but much lower strand indentation than the STP design. The new design shows potential for application in future fusion reactors.

  5. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information--commonly referred to as simply priors--is essential for image restoration, rendering it stable and robust to noise. Moreover, using the priors makes the recovered images exhibit some plausible features of their original counterpart. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based upon the method of iterative shrinkage (aka iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method consists in its working directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae.
Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method can provide restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.
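The recursion of linear filtering followed by soft thresholding can be sketched generically. The code below is plain iterative shrinkage (ISTA) for an arbitrary linear operator A, a stand-in assumption rather than the paper's TV-specific filtering step:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, iters=500):
    """Generic iterative shrinkage: a gradient step on ||Ax - y||^2 / 2
    followed by soft thresholding, repeated until convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

For convergence, `step` should not exceed the reciprocal of the largest eigenvalue of AᵀA; with A = I the iteration reduces to a single soft-thresholding of y.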

  6. Radiation dose reduction in soft tissue neck CT using adaptive statistical iterative reconstruction (ASIR).

    PubMed

    Vachha, Behroze; Brodoefel, Harald; Wilcox, Carol; Hackney, David B; Moonis, Gul

    2013-12-01

    To compare objective and subjective image quality in neck CT images acquired at different tube current-time products (275 mAs and 340 mAs) and reconstructed with filtered-back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR). HIPAA-compliant study with IRB approval and waiver of informed consent. 66 consecutive patients were randomly assigned to undergo contrast-enhanced neck CT at a standard tube-current-time-product (340 mAs; n = 33) or reduced tube-current-time-product (275 mAs, n = 33). Data sets were reconstructed with FBP and 2 levels (30%, 40%) of ASIR-FBP blending at 340 mAs and 275 mAs. Two neuroradiologists assessed subjective image quality in a blinded and randomized manner. Volume CT dose index (CTDIvol), dose-length-product (DLP), effective dose, and objective image noise were recorded. Signal-to-noise ratio (SNR) was computed as mean attenuation in a region of interest in the sternocleidomastoid muscle divided by image noise. Compared with FBP, ASIR resulted in a reduction of image noise at both 340 mAs and 275 mAs. Reduction of tube current from 340 mAs to 275 mAs resulted in an increase in mean objective image noise (p=0.02) and a decrease in SNR (p = 0.03) when images were reconstructed with FBP. However, when the 275 mAs images were reconstructed using ASIR, the mean objective image noise and SNR were similar to those of the standard 340 mAs CT images reconstructed with FBP (p>0.05). Subjective image noise was ranked by both raters as either average or less-than-average irrespective of the tube current and iterative reconstruction technique. Adapting ASIR into neck CT protocols reduced effective dose by 17% without compromising image quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
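The SNR figure of merit used in the study is the mean ROI attenuation divided by image noise (a standard deviation). A minimal sketch, where the two boolean masks (muscle ROI and noise ROI) are assumed inputs:

```python
import numpy as np

def roi_snr(image, signal_mask, noise_mask):
    """Signal-to-noise ratio as in the study: mean attenuation (HU) in a
    muscle ROI divided by image noise (standard deviation in a noise ROI)."""
    return image[signal_mask].mean() / image[noise_mask].std()
```

In practice the masks would be drawn on the sternocleidomastoid muscle and a homogeneous background region of the same slice.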

  7. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
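The amplitude-estimation step, solving the syndrome equations in the least-squares sense for a hypothesized set of impulse-noise positions, can be sketched with a real-valued parity-check matrix; the hypothesis testing that selects the positions is omitted here:

```python
import numpy as np

def correct_from_syndrome(H, r, positions):
    """Estimate real-valued error amplitudes at hypothesized positions by
    solving the syndrome equations H[:, positions] @ a = H @ r in the
    least-squares sense, then subtract the estimated errors."""
    s = H @ r                                  # syndrome of the received frame
    amp, *_ = np.linalg.lstsq(H[:, positions], s, rcond=None)
    corrected = r.copy()
    corrected[positions] -= amp
    return corrected
```

Because codewords lie in the null space of H, the syndrome depends only on the error vector, which is what makes the least-squares amplitude solve possible.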

  8. Simulating the swelling and deformation behaviour in soft tissues using a convective thermal analogy

    PubMed Central

    Wu, John Z; Herzog, Walter

    2002-01-01

    Background It is generally accepted that cartilage adaptation and degeneration are mechanically mediated. Investigating the swelling behaviour of cartilage is important because the stress and strain state of cartilage is associated with the swelling and deformation behaviour. It is well accepted that the swelling of soft tissues is associated with mechanical, chemical, and electrical events. Method The purpose of the present study was to implement the triphasic theory into a commercial finite element tool (ABAQUS) to solve practical problems in cartilage mechanics. Because of the mathematical identity between thermal and mass diffusion processes, the triphasic model was transferred into a convective thermal diffusion process in the commercial finite element software. The problem was solved using an iterative procedure. Results The proposed approach was validated using the one-dimensional numerical solutions and the experimental results of confined compression of articular cartilage described in the literature. The time-history of the force response of a cartilage specimen in confined compression, which was subjected to swelling caused by a sudden change of saline concentration, was predicted using the proposed approach and compared with the published experimental data. Conclusion The advantage of the proposed thermal analogy technique over previous studies is that it accounts for the convective diffusion of ion concentrations and the Donnan osmotic pressure in the interstitial fluid. PMID:12685940

  9. Soft X-ray tomography in support of impurity control in tokamaks

    NASA Astrophysics Data System (ADS)

    Mlynar, J.; Mazon, D.; Imrisek, M.; Loffelmann, V.; Malard, P.; Odstrcil, T.; Tomes, M.; Vezinet, D.; Weinzettl, V.

    2016-10-01

This contribution reviews an important example of current developments in diagnostic systems and data analysis tools aimed at improved understanding and control of transport processes in magnetically confined high temperature plasmas. The choice of tungsten for the plasma facing components of ITER and probably also DEMO means that impurity control in fusion plasmas is now a crucial challenge. Soft X-ray (SXR) diagnostic systems serve as a key sensor for experimental studies of plasma impurity transport, with a clear prospect of its control via actuators based mainly on plasma heating systems. The SXR diagnostic systems typically feature high temporal resolution but limited spatial resolution due to access restrictions. In order to reconstruct the spatial distribution of the SXR radiation from line-integrated measurements, appropriate tomographic methods have been developed and validated, while novel numerical methods relevant for real-time control have been proposed. Furthermore, in order to identify the main contributors to the SXR plasma radiation, at least partial control over the spectral sensitivity range of the detectors would be beneficial, which motivates the development of novel SXR diagnostic methods. Last, but not least, semiconductor photosensitive elements cannot survive the harsh conditions of future fusion reactors due to radiation damage, which calls for the development of radiation-hard SXR detectors. Present research in this field is exemplified by recent results from the tokamaks COMPASS, TORE SUPRA and the Joint European Torus (JET). Further planning is outlined.

  10. Detection of soft tissue densities from digital breast tomosynthesis: comparison of conventional and deep learning approaches

    NASA Astrophysics Data System (ADS)

    Fotin, Sergei V.; Yin, Yin; Haldankar, Hrishikesh; Hoffmeister, Jeffrey W.; Periaswamy, Senthil

    2016-03-01

    Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Accordingly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes, and compared its performance to a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 +/- 0.040 to 0.893 +/- 0.033 for suspicious ROIs, and from 0.852 +/- 0.065 to 0.930 +/- 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.

  11. Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.

    PubMed

    Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C

    2016-10-01

    The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time, such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space and can provide estimates of image uncertainty along with the reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. OE convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume of interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ~2000 iterations of OE, while the counts in the heart converged earlier, at ~200 iterations. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same data set required ~79 min to complete 1000 iterations of conventional OE. A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
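    The MLEM reference used for comparison can be stated as one multiplicative update, x <- x * A^T(y / Ax) / A^T(1), for system matrix A and measured counts y. A toy sketch with an illustrative matrix and phantom (not the 39-pinhole system):

```python
import numpy as np

def mlem(A, y, n_iter=2000, eps=1e-12):
    """MLEM for emission tomography: x <- x * A^T(y / Ax) / A^T 1.

    A is the (n_detectors, n_voxels) system matrix, y the measured counts.
    """
    x = np.ones(A.shape[1])          # uniform, strictly positive start
    sens = A.sum(axis=0)             # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        x *= (A.T @ (y / (proj + eps))) / (sens + eps)
    return x

# toy 3-detector, 2-voxel problem with noiseless data
rng = np.random.default_rng(0)
A = rng.random((3, 2)) + 0.1
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = mlem(A, y)
```

    The multiplicative form preserves non-negativity automatically, which is one reason MLEM is the standard baseline against which OE is compared.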

  12. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan, based on intrinsic differences in tracer pharmacokinetics, is challenging due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of the fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. The RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). Its performance was compared to the indirect parameter estimation method with the original dual-tracer model, and the respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
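    As a rough illustration of why multi-tracer separation is a fitting problem at all: a measured curve is a sum of tracer components with different kinetics, and the fit recovers each component. The sketch below uses a hypothetical two-exponential model with made-up rates, far simpler than the paper's compartment models, and ordinary (unweighted) least squares in place of WNLS:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for kinetic separation: the measured curve is the sum of
# two hypothetical tracer components with different decay rates.
t = np.linspace(0.0, 10.0, 60)

def two_tracer(t, a1, k1, a2, k2):
    """Toy dual-tracer mixture: two mono-exponential components."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

y = two_tracer(t, 3.0, 0.2, 1.5, 1.2)        # noiseless mixed signal
p_fit, _ = curve_fit(two_tracer, t, y, p0=[2.0, 0.1, 1.0, 1.0])
```

    With realistic noise levels the separation degrades quickly, which is why the paper reduces the parameter space and fits during reconstruction rather than after it.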

  13. K-edge ratio method for identification of multiple nanoparticulate contrast agents by spectral CT imaging

    PubMed Central

    Ghadiri, H; Ay, M R; Shiran, M B; Soltanian-Zadeh, H

    2013-01-01

    Objective: Recently introduced energy-sensitive X-ray CT makes it feasible to discriminate different nanoparticulate contrast materials. The purpose of this work is to present a K-edge ratio method for differentiating multiple simultaneous contrast agents using spectral CT. Methods: The ratio of two images relevant to energy bins straddling the K-edge of the materials is calculated using an analytic CT simulator. In the resulting parametric map, the selected contrast agent regions can be identified using a thresholding algorithm. The K-edge ratio algorithm is applied to spectral images of simulated phantoms to identify and differentiate up to four simultaneous and targeted CT contrast agents. Results: We show that different combinations of simultaneous CT contrast agents can be identified by the proposed K-edge ratio method when energy-sensitive CT is used. In the K-edge parametric maps, the pixel values for biological tissues and contrast agents reach a maximum of 0.95, whereas for the selected contrast agents, the pixel values are larger than 1.10. The number of contrast agents that can be discriminated is limited owing to photon starvation. For reliable material discrimination, minimum photon counts corresponding to 140 kVp, 100 mAs and 5-mm slice thickness must be used. Conclusion: The proposed K-edge ratio method is a straightforward and fast method for identification and discrimination of multiple simultaneous CT contrast agents. Advances in knowledge: A new spectral CT-based algorithm is proposed which provides a new concept of molecular CT imaging by non-iteratively identifying multiple contrast agents when they are simultaneously targeting different organs. PMID:23934964
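    The core of the K-edge ratio method is a pixelwise ratio of the images from the energy bins just above and below an agent's K-edge, followed by thresholding: attenuation jumps across the K-edge only for that agent. A toy sketch with made-up attenuation values (the 1.10 threshold follows the separation reported in the abstract):

```python
import numpy as np

# Hypothetical attenuation images for two energy bins straddling one
# agent's K-edge: attenuation jumps across the K-edge only for that agent
# (right column), while tissue (left columns) stays near ratio 1.
img_below = np.array([[1.00, 1.02, 3.0],
                      [0.98, 1.01, 3.1]])    # bin just below the K-edge
img_above = np.array([[0.97, 0.99, 4.5],
                      [0.95, 0.98, 4.6]])    # bin just above the K-edge

ratio_map = img_above / img_below            # K-edge parametric map
agent_mask = ratio_map > 1.10                # thresholding step
```

    Because the map is a simple ratio, no iterative material decomposition is needed, which is what makes the method fast.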

  14. Real-time control of optimal low-thrust transfer to the Sun-Earth L1 halo orbit in the bicircular four-body problem

    NASA Astrophysics Data System (ADS)

    Salmani, Majid; Büskens, Christof

    2011-11-01

    In this article, after describing a procedure for constructing trajectories for a spacecraft in the four-body model, a method for correcting trajectory violations is presented. To construct the trajectories, periodic orbits obtained as solutions of the three-body problem are used, while the bicircular model based on the Sun-Earth rotating frame governs the dynamics of the spacecraft and the other bodies. The destination of the mission is a periodic orbit around the first libration point L1, one of the equilibrium points of the Sun-Earth/Moon three-body problem. On the way to such a distant destination there are many disturbances, such as solar radiation and the solar wind, that make pre-designed plans unreliable; the solar radiation pressure is therefore included in the system dynamics. To overcome these difficulties, formulating the whole transfer as an optimal control problem enables the designer to correct the unavoidable deviations from the pre-designed trajectory and strategies. The optimal control problem is solved by a direct method, transcribing it into a nonlinear programming problem. This transcription yields an unperturbed optimal trajectory together with its sensitivities with respect to perturbations. By modeling these perturbations as parameters embedded in a parametric optimal control problem, one can exploit the parametric sensitivity analysis of the nonlinear programming problem to recalculate the optimal trajectory at a much smaller computational cost. This is achieved by evaluating a first-order Taylor expansion of the perturbed solution in an iterative process aimed at reaching an admissible solution. Finally, numerical results show the applicability of the presented method.
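    The real-time correction idea reduces to one line: the perturbed optimum is approximated by a first-order Taylor expansion around the nominal solution using precomputed sensitivities. A deliberately trivial scalar example (hypothetical objective, not the transfer problem) where the expansion happens to be exact:

```python
# Toy parametric optimization: minimize f(x; p) = (x - p)**2 + x**2,
# whose optimum is x*(p) = p / 2, so the sensitivity dx*/dp = 0.5.
p0 = 1.0
x_nominal = p0 / 2.0          # unperturbed optimal solution
dx_dp = 0.5                   # parametric sensitivity at p0

def corrected_solution(p):
    """First-order Taylor update of the optimum for a perturbed parameter."""
    return x_nominal + dx_dp * (p - p0)

x_corrected = corrected_solution(1.3)   # cheap update, no re-optimization
```

    In the actual method the sensitivities come from the NLP solver, and the update is iterated until an admissible (constraint-satisfying) solution is reached.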

  15. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multi-parametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. By using K-SVD, we therefore not only obtain a sparse representation of the features, condensing the information into a few coefficients, but also reduce the dimensionality. The extracted K-SVD features are evaluated with a machine learning pipeline including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk, as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. It can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
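    K-SVD alternates sparse coding (here by orthogonal matching pursuit with at most L non-zeros) with a per-atom SVD update of the dictionary and the corresponding coefficients. A compact numpy sketch on synthetic sparse data, using the same K=4, L=2 settings purely as illustrative values:

```python
import numpy as np

def omp(D, y, L):
    """Orthogonal matching pursuit: sparse code with at most L non-zeros."""
    idx, r = [], y.copy()
    for _ in range(L):
        idx.append(int(np.argmax(np.abs(D.T @ r))))      # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                          # update residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, K, L, n_iter=10, seed=0):
    """Minimal K-SVD: alternate sparse coding and per-atom SVD updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], K))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, L) for y in Y.T])  # sparse coding
        for k in range(K):                                # dictionary update
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            # residual without atom k, restricted to signals that use it
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                             # rank-1 refit
            X[k, users] = s[0] * Vt[0]
    X = np.column_stack([omp(D, y, L) for y in Y.T])      # final coding pass
    return D, X

# synthetic features: 50 signals, each a 2-sparse mix of 4 true atoms
rng = np.random.default_rng(1)
D_true = rng.standard_normal((8, 4))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((4, 50))
for j in range(50):
    codes[rng.choice(4, 2, replace=False), j] = rng.standard_normal(2)
Y = D_true @ codes
D, X = ksvd(Y, K=4, L=2)
recon_err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

    The joint update of each atom with its coefficients (the SVD step) is what distinguishes K-SVD from fixed-dictionary sparse coding, and is the sense in which the reduction is "optimal" for the training data.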

  16. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared with the conventional use of a Neural Network as a discriminator, this implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29. Program summary: Program title: TMVA-BNN. Catalogue identifier: AEJX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: BSD license. No. of lines in distributed program, including test data, etc.: 5094. No. of bytes in distributed program, including test data, etc.: 1,320,987. Distribution format: tar.gz. Programming language: C++. Computer: Any computer system or cluster with a C++ compiler and a UNIX-like operating system. Operating system: Most UNIX/Linux systems; the application programs were thoroughly tested under Fedora and Scientific Linux CERN. Classification: 11.9. External routines: ROOT package version 5.29 or higher (http://root.cern.ch). Nature of problem: Non-parametric fitting of multivariate distributions. Solution method: An implementation of a Neural Network following the Bayesian statistical interpretation; uses the Laplace approximation for the Bayesian marginalizations and provides automatic complexity control and uncertainty estimation. Running time: Training time depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc.; for the example in this manuscript, about 7 min on a PC/Linux with 2.0 GHz processors.

  17. Formation and relaxation of quasistationary states in particle systems with power-law interactions

    NASA Astrophysics Data System (ADS)

    Marcos, B.; Gabrielli, A.; Joyce, M.

    2017-09-01

    We explore the formation and relaxation of so-called quasistationary states (QSS) for particle distributions in three dimensions interacting via an attractive radial pair potential V(r → ∞) ~ 1/r^γ with γ > 0, and either a soft-core or hard-core regularization at small r. In the first part of the paper we generalize, for any spatial dimension d ≥ 2, Chandrasekhar's approach for the case of gravity to obtain analytic estimates of the rate of collisional relaxation due to two-body collisions. The resultant relaxation rates indicate an essential qualitative difference depending on the integrability of the pair force at large distances: for γ > d − 1, the rate diverges in the large particle number N (mean-field) limit, unless a sufficiently large soft core is present; for γ

  18. [Port wine stains or capillary malformations: surgical treatment].

    PubMed

    Berwald, C; Salazard, B; Bardot, J; Casanova, D; Magalon, G

    2006-01-01

    Capillary malformations usually do not require any treatment. For aesthetic reasons, the family or the child may nevertheless request treatment to attenuate or even remove the unsightly character of the lesion. In this context, the means employed must be simple and must not cause after-effects more unaesthetic than the lesion itself. The pulsed dye laser fulfils these conditions perfectly by improving the color of the lesion without altering the texture of the skin; however, it is a treatment requiring many sessions over 2-3 years. Surgery retains a role in the treatment of capillary malformations resistant to laser (in particular on the limbs) and in treating the soft-tissue hyperplasia encountered in certain cervicofacial locations. It draws on the whole range of plastic surgery techniques, from the simplest to the most complex: excision and suture in one stage or iteratively, excision and coverage with a skin graft, and the use of skin expansion techniques with local flaps.

  19. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without the Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. In terms of overall QBCS reconstruction performance, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it desirable for aerial imagery applications.
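    Projected Landweber reconstruction with soft thresholding amounts to alternating a gradient (Landweber) step on the data-fit term with a shrinkage step on the transform coefficients. A minimal sketch, with the simplifying assumption that the signal is sparse in the identity basis rather than a wavelet domain:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal step for an l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def projected_landweber(Phi, y, lam=0.02, n_iter=1000):
    """Landweber step x += s * Phi^T(y - Phi x), then soft-threshold."""
    s = 1.0 / np.linalg.norm(Phi, 2) ** 2    # step size for stability
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x + s * Phi.T @ (y - Phi @ x), lam * s)
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30)   # CS measurement matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]              # sparse ground truth
y = Phi @ x_true                                     # noiseless measurements
x_hat = projected_landweber(Phi, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    The entropy-aware refinement in the paper adapts the threshold to the quantizer's bitrate; the sketch above uses a single fixed threshold instead.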

  20. Tomographic capabilities of the new GEM based SXR diagnostic of WEST

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; O'Mullane, M.; Mlynar, J.; Loffelmann, V.; Imrisek, M.; Chernyshova, M.; Czarski, T.; Kasprowicz, G.; Wojenski, A.; Bourdelle, C.; Malard, P.

    2016-07-01

    The tokamak WEST (Tungsten Environment in Steady-State Tokamak) will start operating by the end of 2016 as a test bed for the ITER divertor components in long-pulse operation. In this context, radiative cooling by heavy impurities like tungsten (W) in the soft X-ray (SXR) range [0.1 keV; 20 keV] is a critical issue for plasma core performance. Reliable tools are therefore required to monitor the local impurity density and avoid W accumulation. The WEST SXR diagnostic will be equipped with two new GEM-based (Gas Electron Multiplier) poloidal cameras allowing 2D tomographic reconstructions to be performed in tunable energy bands. In this paper, the tomographic capabilities of the Minimum Fisher Information (MFI) algorithm developed for Tore Supra and upgraded for WEST are investigated, in particular through a set of emissivity phantoms and the standard WEST scenario, including reconstruction errors, the influence of noise, and computational time.

  1. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
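    The extended radiosity system described here has the form B = E + diag(rho)(F + S)B, where the subsurface scattering matrix S operates alongside the form-factor matrix F, and it can be solved with the same fixed-point iteration as classical radiosity. A toy 3-patch sketch with made-up coefficients:

```python
import numpy as np

# Hypothetical 3-patch scene: F couples patches by diffuse inter-reflection,
# S adds transport through a translucent material; both feed the same
# fixed-point radiosity iteration B = E + diag(rho) (F + S) B.
E = np.array([1.0, 0.0, 0.0])            # emitted radiosity per patch
rho = np.array([0.6, 0.5, 0.4])          # diffuse reflectance per patch
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.1],
              [0.2, 0.1, 0.0]])          # form-factor matrix
S = 0.05 * np.ones((3, 3))               # toy subsurface scattering matrix

T = rho[:, None] * (F + S)               # combined transport operator
B = E.copy()
for _ in range(100):                     # fixed-point (Jacobi-style) solve
    B = E + T @ B

B_direct = np.linalg.solve(np.eye(3) - T, E)   # closed-form reference
```

    Because the spectral radius of T is below 1 for physically plausible reflectances, the iteration converges geometrically, which is what enables the near-interactive relighting rates the paper reports.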

  2. Hysteresis in column systems

    NASA Astrophysics Data System (ADS)

    Ivanyi, P.; Ivanyi, A.

    2015-02-01

    In this paper one column of the telescopic construction of a bell tower is investigated. The hinges at the support of the column and at the connecting joint between the upper and lower columns are modelled with rotational springs. The characteristics of the springs are assumed to be non-linear, and their hysteresis is represented with the Preisach hysteresis model. The masses of the columns and of the bell with the fly are concentrated at the top of the column. The tolling process is simulated with a cyclic load, and the elements of the column are considered completely rigid. The time integration of the non-linear equations of motion is performed with the Crank-Nicolson scheme, and the non-linear hysteresis is handled by the fixed-point technique. The numerical simulation of the dynamic system is carried out under different combinations of soft, medium and hard hysteresis properties of the hinges.

  3. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage or iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
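    The distinction drawn here can be seen on a tiny brute-force example: the ML decoder picks the single most likely codeword, while the bitwise MAP decoder marginalizes the codeword posterior, and the two can disagree. A sketch with an illustrative 3-word codebook over a binary symmetric channel (not a code from the chapter):

```python
from math import prod

codebook = [(0, 0), (0, 1), (1, 0)]      # equiprobable codewords
p = 0.3                                  # BSC crossover probability
r = (1, 1)                               # received word

def likelihood(c):
    """Channel likelihood P(r | c) for a memoryless BSC."""
    return prod(p if ci != ri else 1 - p for ci, ri in zip(c, r))

# ML decoding: single most likely codeword (minimizes word error probability)
ml_word = max(codebook, key=likelihood)

# Bitwise MAP: marginalize the codeword posterior, giving soft outputs and
# minimizing the bit error probability
Z = sum(likelihood(c) for c in codebook)
p1 = [sum(likelihood(c) for c in codebook if c[i] == 1) / Z for i in range(2)]
map_bits = tuple(int(q > 0.5) for q in p1)
```

    Here the ML codeword has a 1 in one position, yet each bit individually is more likely to be 0 under the posterior, so the MAP bit decisions differ from the ML word; the per-bit posteriors p1 are exactly the soft information the chapter refers to.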

  4. Inverse obstacle problem for the scalar Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni F.

    1994-07-01

    The method presented is aimed at identifying the shape of an axially symmetric, sound-soft acoustic scatterer from knowledge of the incident plane wave and of the scattering amplitude. The method relies on the approximate back propagation (ABP) of the estimated far-field coefficients to the obstacle boundary and iteratively minimizes a boundary defect, without the addition of any penalty term. The ABP operator owes its structure to the properties of complete families of linearly independent solutions of the Helmholtz equation. If the obstacle is known, as happens in simulations, the theory also provides independent means of predicting the performance of the ABP method. The ABP algorithm and the related computer code are outlined. Several reconstruction examples are considered, where noise is added to the estimated far-field coefficients and other errors are deliberately introduced in the data. Many numerical and graphical results are provided.

  5. The inverted Batman incision: a new incision in transcolumellar incision for open rhinoplasty.

    PubMed

    Nakanishi, Yuji; Nagasao, Tomohisa; Shimizu, Yusuke; Miyamoto, Junpei; Fukuta, Keizo; Kishi, Kazuo

    2013-12-01

    Columellar and nostril shapes often present irregularity after transcolumellar incision for open rhinoplasty, because of contracture of the incised wound. The present study introduces a new technique to prevent this complication and verifies its efficacy in improving cosmetic appearance. In our new method, a zig-zag incision with three small triangular flaps is made on the columella and in the pericolumellar regions of the bilateral nostril rims. Since the shape of the incision resembles the contour of an inverted "batman", we term our new method the "Inverted Batman" incision. To verify its effectiveness, aesthetic evaluation was conducted for 21 patients operated on using the conventional transcolumellar incision (Conventional Group) and 19 patients operated on using the Inverted Batman incision (Inverted Batman Group). The evaluation was performed by three plastic surgeons, using a four-grade scale to assess three separate items: symmetry of the bilateral soft triangles, symmetry of the bilateral margins of the columella, and evenness of the columellar surface. The scores of the two groups for these three items were compared using a non-parametric test (Mann-Whitney U-test). For all three items, the Inverted Batman Group patients presented higher scores than the Conventional Group patients. The Inverted Batman incision is effective in preserving the correct anatomical structure of the columella, soft triangle, and nostril rims. Hence, we recommend the Inverted Batman incision as a useful technique for open rhinoplasty.

  6. Assessing astronaut injury potential from suit connectors using a human body finite element model.

    PubMed

    Danelson, Kerry A; Bolte, John H; Stitzel, Joel D

    2011-02-01

    The new Orion space capsule requires additional consideration of possible injury during landing due to the dynamic nature of the impact. The purpose of this parametric study was to determine changes in the injury response of a human body finite element model with a suit connector (SC). The possibility of thoracic bony injury, thoracic soft tissue injury, and femur injury were assessed in 24 different model configurations. These simulations had two SC placements and two SC types, a 2.27-kg rectangular and a 3.17-kg circular SC. A baseline model was tested with the same acceleration pulses and no SC for comparison. Further simulations were conducted to determine the protective effect of SC location changes and adding small and large rigid chest plates. The possibilities of rib, chest soft tissue, and femur injury were evaluated using sternal deflection, chest deflection, viscous criterion, and strain values. The results indicated a higher likelihood of chest injury than femur injury. The mean first principal strain in the femur was 0.136 +/- 0.007%, which is well below the failure limit for cortical bone. The placement of chest plates had a protective effect and reduced the sternal deflection, chest deflection, and viscous criterion values. If possible, the SC should be placed on the thigh to minimize injury risk metrics. Chest plates appear to offer some protective value; therefore, a large rigid chest plate or similar countermeasure should be considered for chest SC placement.

  7. Fatigue Magnification Factors of Arc-Soft-Toe Bracket Joints

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Li, Huajun; Wang, Hongqing; Wang, Shuqing; Li, Dejiang; Li, Qun; Fang, Hui

    2018-06-01

    Arc-soft-toe brackets (ASTBs), as joint structures in marine structures, are hot spots with significant stress concentration; the fatigue behavior of ASTBs is therefore an important concern in their design. Since macroscopic geometric factors strongly influence the stress flows in joints, the shapes and sizes of ASTBs should be reflected in the stress distribution around cracks at the hot spots. In this paper, we introduce a geometric magnification factor reflecting the macroscopic geometric effects of ASTB crack features and construct a 3D finite element model to simulate the distribution of the stress intensity factor (SIF) at the crack endings. Sensitivity analyses with respect to the geometric ratios Ht/Lb, R/Lb and Lt/Lb are performed, and the relations between the geometric factor and these parameters are presented. A set of parametric equations for the geometric magnification factor is obtained using a curve-fitting technique. A nonlinear relationship exists between the SIF and the ratio of ASTB arm to toe length; when this ratio reaches a marginal value, the SIF of the crack at the ASTB toe is no longer influenced by the ASTB geometric parameters. In addition, the arc shape of the ASTB slope edge can transform the stress-flow path, which significantly affects the SIF at the ASTB toe. A proper way to reduce the stress concentration is to set the slope edge arc size equal to the ASTB arm length.
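    The curve-fitting step, producing parametric equations for the magnification factor as a function of a geometric ratio, can be sketched with an ordinary least-squares polynomial fit; the quadratic relation and data below are invented for illustration:

```python
import numpy as np

# Fit a parametric equation M(r) for a geometric magnification factor as a
# polynomial in a geometric ratio r (e.g. Ht/Lb). The underlying relation
# and its coefficients are hypothetical, not the paper's fitted equations.
r = np.linspace(0.2, 1.0, 9)
M_obs = 1.0 + 0.8 * r - 0.3 * r**2           # assumed underlying relation
coeffs = np.polyfit(r, M_obs, deg=2)         # least-squares polynomial fit
M_fit = np.polyval(coeffs, r)
```

    With FE-computed SIF data in place of M_obs, the same fit yields closed-form design equations that avoid rerunning the 3D model for each geometry.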

  8. New generation attosecond light sources

    NASA Astrophysics Data System (ADS)

    Chang, Zenghu

    2017-04-01

    Millijoule level, few-cycle, carrier-envelope phase (CEP) stable Ti:Sapphire lasers centered at 800 nm have been the workhorse for the first generation attosecond light sources in the last 16 years. The spectral range of isolated attosecond pulses with sufficient photon flux for time-resolved pump-probe experiments has been limited to extreme ultraviolet (10 to 150 eV). The shortest pulses achieved are 67 as. It was demonstrated in 2001 that the cutoff photon energy of the high harmonic spectrum could be extended by increasing the center wavelength of the driving lasers. In recent years, mJ level, two-cycle, carrier-envelope phase stabilized lasers at 1.6 to 2.1 micron have been developed by implementing Optical Parametric Chirped Pulse Amplification (OPCPA) techniques. Recently, when long wavelength driving was combined with polarization gating, isolated soft x-rays in the water window (280-530 eV) were generated in our laboratory. The number of x-ray photons in the 120-400 eV range is comparable to that generated with Ti:Sapphire lasers in the 50 to 150 eV range. The ultrabroadband isolated x-ray pulses with 53 as duration were characterized by attosecond streaking measurements. The new generation attosecond soft X-ray sources open the door for studying electron dynamics with element specificity through core to valence transitions. NSF (1068604), ARO (W911NF-14-1-0383), AFOSR (FA9550-15-1-0037, FA9550-16-1-0013), DARPA-PULSE (W31P4Q1310017).

  9. Influence of different restorative techniques on marginal seal of class II composite restorations

    PubMed Central

    RODRIGUES JUNIOR, Sinval Adalberto; PIN, Lúcio Fernando da Silva; MACHADO, Giovanna; DELLA BONA, Álvaro; DEMARCO, Flávio Fernando

    2010-01-01

    Objective To evaluate the gingival marginal seal in class II composite restorations using different restorative techniques. Material and Methods Class II box cavities were prepared in both proximal faces of 32 sound human third molars with gingival margins located in either enamel or dentin/cementum. Restorations were performed as follows: G1 (control): composite, conventional light curing technique; G2: composite, soft-start technique; G3: amalgam/composite association (amalcomp); and G4: resin-modified glass ionomer cement/composite, open sandwich technique. The restored specimens were thermocycled. Epoxy resin replicas were made and coated for scanning electron microscopy examination. For microleakage evaluation, teeth were coated with nail polish and immersed in dye solution. Teeth were cut in 3 slices and dye penetration was recorded (mm), digitized and analyzed with Image Tool software. Microleakage data were analyzed statistically by the non-parametric Kruskal-Wallis and Mann-Whitney tests. Results Leakage in enamel was lower than in dentin (p<0.001). G2 exhibited the lowest leakage values (p<0.05) in enamel margins, with no differences between the other groups. In dentin margins, groups G1 and G2 had similar behavior and both showed less leakage (p<0.05) than groups G3 and G4. SEM micrographs revealed different marginal adaptation patterns for the different techniques and for the different substrates. Conclusion The soft-start technique showed no leakage in enamel margins and produced similar values to those of the conventional (control) technique for dentin margins. PMID:20379680
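    The statistical analysis described, Kruskal-Wallis across all groups followed by pairwise Mann-Whitney tests, can be sketched with scipy on made-up leakage values (not the study's data):

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical dye-penetration measurements (mm) for the four techniques;
# the numbers are invented for illustration only.
g1 = [0.8, 1.1, 0.9, 1.0, 1.2]   # conventional light curing
g2 = [0.7, 0.9, 0.8, 1.0, 0.9]   # soft-start
g3 = [1.6, 1.9, 1.7, 2.0, 1.8]   # amalcomp
g4 = [1.5, 1.8, 1.9, 1.7, 2.0]   # open sandwich

h_stat, p_overall = kruskal(g1, g2, g3, g4)   # do any groups differ?
u_stat, p_pair = mannwhitneyu(g2, g3)         # pairwise follow-up test
```

    Rank-based tests like these make no normality assumption, which suits bounded, skewed measurements such as leakage depths.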

  10. 3-D shape estimation of DNA molecules from stereo cryo-electron micro-graphs using a projection-steerable snake.

    PubMed

    Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael

    2006-01-01

    We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.

  11. Demonstration of a forward iterative method to reconstruct brachytherapy seed configurations from x-ray projections

    NASA Astrophysics Data System (ADS)

    Murphy, Martin J.; Todor, Dorin A.

    2005-06-01

    By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.

  12. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; Luke Moughon

    2005-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models have been developed and applied that illustrate the fundamental relationships between design parameters and friction losses. Low friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several additional power cylinder design areas to the overall system analysis. These include analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant, and 5% for surface characteristics, are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.

  13. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    PubMed

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
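    The core of the scaling step described above is the computation of a maximal eigenvalue of a self-adjoint, positive-definite matrix by power iterations. A minimal sketch of that building block follows; the function name, tolerance and test matrix are illustrative, not taken from the paper's reconstruction code:

```python
import numpy as np

def max_eigenvalue_power_iteration(A, tol=1e-10, max_iter=1000):
    """Estimate the maximal eigenvalue of a self-adjoint positive-definite
    matrix A by power iterations with a Rayleigh-quotient estimate."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        lam_new = v @ w                    # Rayleigh quotient
        v = w / np.linalg.norm(w)          # renormalize the iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new

# SPD test matrix with known largest eigenvalue
print(round(max_eigenvalue_power_iteration(np.diag([1.0, 2.0, 5.0])), 6))  # → 5.0
```

    In the paper's setting, A is the (implicitly defined) scaling operator, so each `A @ v` would be replaced by an operator application rather than an explicit matrix product.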

  14. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
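    The projection-based removal of the model-error component can be sketched as follows. The dictionary layout, variable names and toy numbers are illustrative assumptions, not the authors' implementation; in the paper the dictionary also grows during the MCMC run:

```python
import numpy as np

def remove_model_error(residual, proposal, dict_params, dict_errors, k=2):
    """Subtract the model-error component of the residual: orthonormalize
    the stored error vectors of the K dictionary entries whose parameters
    are nearest the current proposal, and project the residual off that
    local basis."""
    nearest = np.argsort(np.linalg.norm(dict_params - proposal, axis=1))[:k]
    Q, _ = np.linalg.qr(dict_errors[nearest].T)   # local orthonormal basis
    return residual - Q @ (Q.T @ residual)

# Toy dictionary: the two entries nearest the proposal have error vectors
# spanning the e1-e2 plane, so that component of the residual is removed.
dict_params = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [6.0, 6.0]])
dict_errors = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [1.0, 1.0, 1.0]])
cleaned = remove_model_error(np.array([1.0, 2.0, 3.0]),
                             np.array([0.05, 0.0]), dict_params, dict_errors)
print(cleaned)  # → [0. 0. 3.]
```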

  15. HAPRAP: a haplotype-based iterative method for statistical fine mapping using GWAS summary statistics.

    PubMed

    Zheng, Jie; Rodriguez, Santiago; Laurin, Charles; Baird, Denis; Trela-Larsen, Lea; Erzurumluoglu, Mesut A; Zheng, Yi; White, Jon; Giambartolomei, Claudia; Zabaneh, Delilah; Morris, Richard; Kumari, Meena; Casas, Juan P; Hingorani, Aroon D; Evans, David M; Gaunt, Tom R; Day, Ian N M

    2017-01-01

    Fine mapping is a widely used approach for identifying the causal variant(s) at disease-associated loci. Standard methods (e.g. multiple regression) require individual level genotypes. Recent fine mapping methods using summary-level data require the pairwise correlation coefficients (r²) of the variants. However, haplotypes, rather than pairwise r², are the true biological representation of linkage disequilibrium (LD) among multiple loci. In this article, we present an empirical iterative method, HAPlotype Regional Association analysis Program (HAPRAP), that enables fine mapping using summary statistics and haplotype information from an individual-level reference panel. Simulations with individual-level genotypes show that the results of HAPRAP and multiple regression are highly consistent. In simulation with summary-level data, we demonstrate that HAPRAP is less sensitive to poor LD estimates. In a parametric simulation using Genetic Investigation of ANthropometric Traits height data, HAPRAP performs well with a small training sample size (N < 2000) while other methods become suboptimal. Moreover, HAPRAP's performance is not affected substantially by single nucleotide polymorphisms (SNPs) with low minor allele frequencies. We applied the method to existing quantitative trait and binary outcome meta-analyses (human height, QTc interval and gallbladder disease); all previously reported association signals were replicated and two additional variants were independently associated with human height. Due to the growing availability of summary level data, the value of HAPRAP is likely to increase markedly for future analyses (e.g. functional prediction and identification of instruments for Mendelian randomization).
The HAPRAP package and documentation are available at http://apps.biocompute.org.uk/haprap/. Contact: jie.zheng@bristol.ac.uk or tom.gaunt@bristol.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  16. Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.

    PubMed

    Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard

    2017-01-01

    Spinal fusion surgery is an important procedure for treating spinal diseases and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. To explore whether metal artifacts and image noise can be reduced by combining two technologies, adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of five sets of images was objectively and subjectively assessed. With objective image quality assessment, metal artifacts decreased when increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.

  17. The dark side of electroweak naturalness beyond the MSSM

    NASA Astrophysics Data System (ADS)

    Bélanger, Geneviève; Delaunay, Cédric; Goudelis, Andreas

    2015-04-01

    Weak scale supersymmetry (SUSY) remains a prime explanation for the radiative stability of the Higgs field. A natural account of the Higgs boson mass, however, strongly favors extensions of the Minimal Supersymmetric Standard Model (MSSM). A plausible option is to introduce a new supersymmetric sector coupled to the MSSM Higgs fields, whose associated states resolve the little hierarchy problem between the third generation soft parameters and the weak scale. SUSY also accommodates a weakly interacting cold dark matter (DM) candidate in the form of a stable neutralino. In minimal realizations, the thus-far null results of direct DM searches, along with the DM relic abundance constraint, introduce a level of fine-tuning as severe as the one due to the SUSY little hierarchy problem. We analyse the generic implications of new SUSY sectors parametrically heavier than the minimal SUSY spectrum, devised to increase the Higgs boson mass, on this "little neutralino DM problem". We focus on the SUSY operator of smallest scaling dimension in an effective field theory description, which modifies the Higgs and DM sectors in a correlated manner. Within this framework, we show that recent null results from the LUX experiment imply a tree-level fine-tuning for gaugino DM which is parametrically at least a few times larger than that of the MSSM. Higgsino DM whose relic abundance is generated through a thermal freeze-out mechanism remains also severely fine-tuned, unless the DM lies below the weak boson pair-production threshold. As in the MSSM, well-tempered gaugino-Higgsino DM is strongly disfavored by present direct detection results.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasouli, C.; Abbasi Davani, F.; Rokrok, B.

    Plasma confinement using an external magnetic field is one of the successful routes to controlled nuclear fusion. Development and validation of the solution process for plasma equilibrium in experimental toroidal fusion devices is the main subject of this work. Solution of the nonlinear 2D stationary problem posed by the Grad-Shafranov equation gives quantitative information about plasma equilibrium inside the vacuum chamber of hot fusion devices. This study suggests solving the plasma equilibrium equation, which is essential in toroidal nuclear fusion devices, using a mesh-free method under the condition that the plasma boundary is unknown. The Grad-Shafranov equation has been solved numerically by the point interpolation collocation mesh-free method. Important features of this approach include being truly mesh free, simple mathematical relationships between points, and acceptable precision in comparison with the parametric results. The calculation has been carried out using regular and irregular nodal distributions and support domains with different numbers of points. The relative error between numerical and analytical solutions is discussed for several test examples such as the small size Damavand tokamak, an ITER-like equilibrium, an NSTX-like equilibrium, and a typical Spheromak.

  19. Quantification of Dynamic [18F]FDG PET Studies in Acute Lung Injury.

    PubMed

    Grecchi, Elisabetta; Veronese, Mattia; Moresco, Rosa Maria; Bellani, Giacomo; Pesenti, Antonio; Messa, Cristina; Bertoldo, Alessandra

    2016-02-01

    This work aims to investigate lung glucose metabolism using 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) positron emission tomography (PET) imaging in acute lung injury (ALI) patients. Eleven ALI patients and five healthy controls underwent a dynamic [(18)F]FDG PET/X-ray computed tomography (CT) scan. The standardized uptake values (SUV) and three different methods for the quantification of glucose metabolism (i.e., ratio, Patlak, and spectral analysis iterative filter, SAIF) were applied both at the region and the voxel levels. SUV reported a lower correlation than the ratio with the net tracer uptake. Patlak and SAIF analyses did not show any significant spatial or quantitative (R(2) > 0.80) difference. The additional information provided by SAIF showed that in lung inflammation, elevated tracer uptake is coupled with abnormal tracer exchanges within and between lung tissue compartments. Full kinetic modeling provides a multi-parametric description of glucose metabolism in the lungs. This allows characterizing the spatial distribution of lung inflammation as well as returning the functional state of the tissues.
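    Of the three quantification methods compared in this record, the Patlak graphical method is the easiest to sketch: for an irreversibly trapped tracer, the tissue-to-plasma activity ratio becomes linear in the normalized integrated plasma input, with slope equal to the net uptake rate Ki. A hedged illustration on synthetic data (not the study's data or code; names and the equilibration time are assumptions):

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=10.0):
    """Patlak graphical analysis: for t >= t_star (equilibration time),
    ct/cp ~ Ki * (cumulative integral of cp)/cp + V0, so Ki is the slope."""
    # trapezoidal cumulative integral of the plasma input cp(t)
    cum_cp = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    m = t >= t_star
    slope, _ = np.polyfit(cum_cp[m] / cp[m], ct[m] / cp[m], 1)
    return slope

# Synthetic irreversible tracer: constant plasma input, Ki = 0.05/min
t = np.linspace(0.0, 60.0, 61)
cp = np.ones_like(t)
ct = 0.05 * t + 0.2            # slope Ki, intercept V0
print(round(patlak_ki(t, cp, ct), 4))  # → 0.05
```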

  20. Calculating stage duration statistics in multistage diseases.

    PubMed

    Komarova, Natalia L; Thalhauser, Craig J

    2011-01-01

    Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for statistical hypothesis testing needed to determine if treatment is having a significant effect on the progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions, however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. On the other hand, the counting method yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer disease progression are discussed.
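    A person-time counting estimator in the spirit of the counting algorithm can be sketched as below; the data layout and the exit-counting estimator are illustrative assumptions, not the authors' exact method:

```python
from collections import defaultdict

def stage_duration_means(records):
    """Illustrative counting estimator: for each stage, accumulate the
    observed person-time spent in that stage and divide by the number of
    observed exits (transitions to another stage).

    records: per-patient visit histories, each a time-sorted list of
             (time, stage) tuples."""
    time_in_stage = defaultdict(float)
    exits = defaultdict(int)
    for history in records:
        for (t0, s0), (t1, s1) in zip(history, history[1:]):
            time_in_stage[s0] += t1 - t0
            if s1 != s0:
                exits[s0] += 1
    return {s: time_in_stage[s] / exits[s] for s in exits}

# Two toy patients progressing through stages 1 -> 2 -> 3
patients = [
    [(0, 1), (2, 2), (5, 3)],   # stage 1 for 2 yr, stage 2 for 3 yr
    [(0, 1), (4, 2), (6, 3)],   # stage 1 for 4 yr, stage 2 for 2 yr
]
print(stage_duration_means(patients))  # → {1: 3.0, 2: 2.5}
```

    Like the counting method in the record, this is non-iterative and non-parametric; unlike it, no correction is made here for interval censoring between visits.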

  1. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
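    The successive projection step itself is compact: repeatedly pick the column of largest residual norm and project the data onto the orthogonal complement of the chosen direction. A minimal sketch on toy data (the real application uses multi-parametric MRI feature matrices):

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm: greedily select r columns of X,
    each time taking the column of maximal residual norm and projecting
    out the chosen direction."""
    R = X.astype(float).copy()
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)            # orthogonal subspace projection
    return selected

# Columns 0 and 1 are the "pure" endmembers; column 2 is a mixture
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])
print(spa(X, 2))  # → [0, 1]
```

    The selected columns (e.g. `W0 = X[:, spa(X, r)]`, an illustrative name) would then initialize the source matrix for the NMF iterations.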

  2. Simultaneous Deployment and Tracking Multi-Robot Strategies with Connectivity Maintenance

    PubMed Central

    Tardós, Javier; Aragues, Rosario; Sagüés, Carlos; Rubio, Carlos

    2018-01-01

    Multi-robot teams composed of ground and aerial vehicles have gained attention during the last few years. We present a scenario where both types of robots must monitor the same area from different view points. In this paper, we propose two Lloyd-based tracking strategies to allow the ground robots (agents) to follow the aerial ones (targets), keeping the connectivity between the agents. The first strategy establishes density functions on the environment so that the targets acquire more importance than other zones, while the second one iteratively modifies the virtual limits of the working area depending on the positions of the targets. We consider the connectivity maintenance due to the fact that coverage tasks tend to spread the agents as much as possible, which is addressed by restricting their motions so that they keep the links of a minimum spanning tree of the communication graph. We provide a thorough parametric study of the performance of the proposed strategies under several simulated scenarios. In addition, the methods are implemented and tested using realistic robotic simulation environments and real experiments. PMID:29558446
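    The first (density-function) strategy follows a classical weighted-Lloyd step; a discretized sketch is below. The Gaussian density, grid resolution and parameter names are illustrative assumptions, and the minimum-spanning-tree connectivity constraint is omitted:

```python
import numpy as np

def lloyd_step(agents, targets, grid, sigma=0.1):
    """One weighted-Lloyd iteration on a discretized workspace: build a
    density peaked at the targets, assign each grid point to its nearest
    agent (discrete Voronoi cells), and move every agent to the weighted
    centroid of its cell."""
    density = np.zeros(len(grid))
    for tgt in targets:                        # one Gaussian bump per target
        density += np.exp(-np.sum((grid - tgt) ** 2, axis=1) / (2 * sigma**2))
    owner = np.argmin(
        np.linalg.norm(grid[:, None, :] - agents[None, :, :], axis=2), axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        w = density[owner == i]
        if w.sum() > 0:
            new_agents[i] = (grid[owner == i] * w[:, None]).sum(axis=0) / w.sum()
    return new_agents

# A single ground agent is drawn toward the density peak at the target
grid = np.stack(np.meshgrid(np.linspace(0, 1, 21),
                            np.linspace(0, 1, 21)), axis=-1).reshape(-1, 2)
agents = np.array([[0.1, 0.1]])
targets = np.array([[0.8, 0.8]])
print(lloyd_step(agents, targets, grid))
```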

  3. Performance index for virtual reality phacoemulsification surgery

    NASA Astrophysics Data System (ADS)

    Söderberg, Per; Laurell, Carl-Gustaf; Simawi, Wamidh; Skarman, Eva; Nordqvist, Per; Nordh, Leif

    2007-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at developing a performance index that characterizes the performance of an individual trainee. We recorded measurements of 28 response variables during three iterated surgical sessions in 9 subjects naive to cataract surgery and 6 experienced cataract surgeons, separately for the sculpting phase and the evacuation phase of phacoemulsification surgery. We further defined a specific performance index for a specific measurement variable and a total performance index for a specific trainee. The distribution function for the total performance index was relatively evenly distributed for both the sculpting and the evacuation phase, indicating that parametric statistics can be used for comparison of total average performance indices for different groups in the future. The current total performance index for an individual considers all measurement variables included with the same weight. It is possible that a future development of the system will indicate that a better characterization of a trainee can be obtained if the various measurement variables are given specific weights. The currently developed total performance index for a trainee is statistically an independent observation of that particular trainee.

  4. Lithium D-cell study

    NASA Technical Reports Server (NTRS)

    Size, P.; Takeuchi, Esther S.

    1993-01-01

    The purpose of this contract is to evaluate parametrically the effects of various factors including the electrolyte type, electrolyte concentration, depolarizer type, and cell configuration on lithium cell electrical performance and safety. This effort shall allow for the selection and optimization of cell design for future NASA applications while maintaining close ties with WGL's continuous improvements in manufacturing processes and lithium cell design. Taguchi experimental design techniques are employed in this task, and allow for a maximum amount of information to be obtained while requiring significantly less cells than if a full factorial design were employed. Acceptance testing for this task is modeled after the NASA Document EP5-83-025, Revision C, for cell weights, OCV's and load voltages. The performance attributes that are studied in this effort are fresh capacity and start-up characteristics evaluated at two rates and two temperatures, shelf-life characteristics including start-up and capacity retention, and iterative microcalorimetry measurements. Abuse testing includes forced over discharge at two rates with and without diode protection, temperature tolerance testing, and shorting tests at three rates with the measurement of heat generated during shorting conditions.

  5. Semi-analytical modelling of positive corona discharge in air

    NASA Astrophysics Data System (ADS)

    Pontiga, Francisco; Yanallah, Khelifa; Chen, Junhong

    2013-09-01

    Semianalytical approximate solutions of the spatial distribution of electric field and electron and ion densities have been obtained by solving Poisson's equations and the continuity equations for the charged species along the Laplacian field lines. The need to iterate for the correct value of space charge on the corona electrode has been eliminated by using the corona current distribution over the grounded plane derived by Deutsch, which predicts a cos^m(θ) law similar to Warburg's law. Based on the results of the approximated model, a parametric study of the influence of gas pressure, the corona wire radius, and the inter-electrode wire-plate separation has been carried out. Also, the approximate solution of the electron number density has been combined with a simplified plasma chemistry model in order to compute the ozone density generated by the corona discharge in the presence of a gas flow. This work was supported by the Consejeria de Innovacion, Ciencia y Empresa (Junta de Andalucia) and by the Ministerio de Ciencia e Innovacion, Spain, within the European Regional Development Fund contracts FQM-4983 and FIS2011-25161.

  6. Reflectance Estimation from Urban Terrestrial Images: Validation of a Symbolic Ray-Tracing Method on Synthetic Data

    NASA Astrophysics Data System (ADS)

    Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.

    2011-04-01

    Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. On the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene with LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.

  7. Power flow prediction in vibrating systems via model reduction

    NASA Astrophysics Data System (ADS)

    Li, Xianhui

    This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction which preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide a priori estimate of the sizes of good quality ROMs. Frequency averaging is recast as ensemble averaging and Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
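    The projection step described above can be sketched for an undamped system: forced responses at the interpolation frequencies span the reduced basis, and by construction the resulting ROM reproduces the full response exactly at those frequencies. The function name and the toy spring-mass chain are illustrative:

```python
import numpy as np

def forced_response_rom(K, M, f, freqs):
    """Project stiffness K and mass M onto the subspace spanned by the
    forced responses x(w) = (K - w^2 M)^{-1} f at frequencies w."""
    X = np.column_stack([np.linalg.solve(K - w**2 * M, f) for w in freqs])
    V, _ = np.linalg.qr(X)                     # orthonormal reduced basis
    return V.T @ K @ V, V.T @ M @ V, V

# 5-DOF spring-mass chain, force on the first mass (illustrative)
n = 5
K = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
f = np.zeros(n)
f[0] = 1.0
Kr, Mr, V = forced_response_rom(K, M, f, [1.0, 1.5])

# The ROM reproduces the full response exactly at an interpolation frequency
x_full = np.linalg.solve(K - 1.0 * M, f)
x_rom = V @ np.linalg.solve(Kr - 1.0 * Mr, V.T @ f)
print(np.allclose(x_full, x_rom))  # → True
```

    The final check relies on the standard projection property: because x(w) lies in the span of V, the Galerkin ROM matches the full response at each interpolation frequency; the dissertation's matrix-free variant builds the same subspace from power quantities instead of explicit solves.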

  8. PaLaCe: A Coarse-Grain Protein Model for Studying Mechanical Properties.

    PubMed

    Pasi, Marco; Lavery, Richard; Ceres, Nicoletta

    2013-01-08

    We present a coarse-grain protein model PaLaCe (Pasi-Lavery-Ceres) that has been developed principally to allow fast computational studies of protein mechanics and to clarify the links between mechanics and function. PaLaCe uses a two-tier protein representation with one to three pseudoatoms representing each amino acid for the main nonbonded interactions, combined with atomic-scale peptide groups and some side chain atoms to allow the explicit representation of backbone hydrogen bonds and to simplify the treatment of bonded interactions. The PaLaCe force field is composed of physics-based terms, parametrized using Boltzmann inversion of conformational probability distributions derived from a protein structure data set, and iteratively refined to reproduce the experimental distributions. PaLaCe has been implemented in the MMTK simulation package and can be used for energy minimization, normal mode calculations, and molecular or stochastic dynamics. We present simulations with PaLaCe that test its ability to maintain stable structures for folded proteins, reproduce their dynamic fluctuations, and correctly model large-scale, force-induced conformational changes.

  9. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere.
Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
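
    The k-t space propagator can be illustrated in one dimension. A minimal sketch assuming a homogeneous medium and a periodic grid, with illustrative grid, time-step, and mode values (not the paper's simulation parameters); the sinc correction factor is what makes the second-order time stepping exact for constant sound speed:

```python
import numpy as np

# 1D k-t space propagation for a homogeneous medium; grid size, time
# step, and mode number below are illustrative values only.
n, dx, c0, dt = 256, 1e-3, 1500.0, 1e-7
x = np.arange(n) * dx
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

# Temporal correction sinc(c0*k*dt/2); note np.sinc(u) = sin(pi u)/(pi u).
corr = np.sinc(c0 * k * dt / (2 * np.pi))
lap = -(k * corr) ** 2                   # corrected spectral Laplacian

k0 = 2 * np.pi * 8 / (n * dx)            # a mode that fits the periodic grid
theta = c0 * k0 * dt
p_prev = np.cos(k0 * x) * np.cos(theta)  # standing wave at t = -dt
p = np.cos(k0 * x)                       # and at t = 0

steps = 400
for _ in range(steps):
    lap_p = np.fft.ifft(lap * np.fft.fft(p)).real
    p, p_prev = 2 * p - p_prev + (c0 * dt) ** 2 * lap_p, p

# Analytic standing-wave solution after `steps` steps
exact = np.cos(k0 * x) * np.cos(c0 * k0 * steps * dt)
err = np.max(np.abs(p - exact))
```

    For this homogeneous case the scheme tracks the analytic solution to machine precision regardless of the time-step size, which is the property the abstract refers to as exactness for homogeneous media.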

  10. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of STP. PMID:18506609
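
    The non-parametric input-output transformation can be illustrated with a discrete second-order Volterra series. The kernels and input below are hypothetical toy values for illustration, not the estimated synaptic kernels of the study:

```python
import numpy as np

def volterra2(x, k0, k1, k2):
    """Discrete second-order Volterra series: y(n) = k0
    + sum_t k1[t] x(n-t) + sum_{t1,t2} k2[t1,t2] x(n-t1) x(n-t2),
    with memory length M = len(k1)."""
    M = len(k1)
    xp = np.concatenate([np.zeros(M - 1), np.asarray(x, float)])
    y = np.full(len(x), float(k0))
    for i in range(len(x)):
        w = xp[i + M - 1::-1][:M]        # most recent M inputs, newest first
        y[i] += k1 @ w + w @ k2 @ w
    return y

# A paired pulse: the off-diagonal k2 term adds a facilitation-like
# interaction when two inputs fall within the memory window.
y = volterra2([1.0, 1.0, 0.0], 0.0,
              np.array([1.0, 0.5]),
              np.array([[0.0, 0.2], [0.2, 0.0]]))
```

    The second-order kernel is what lets such a model capture paired-pulse facilitation or depression, the hallmark of STP.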

  11. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly.
    Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may be caused by the biomechanical deformation process. Accuracy and stability of the model response were validated using ground-truth simulations representing soft tissue behavior under local and global deformations. Numerical accuracy of the HN deformations was analyzed by applying nonrigid skeletal transformations acquired from interfraction kVCT images to the model's skeletal structures and comparing the subsequent soft tissue deformations of the model with the clinical anatomy. Results: The GPU based framework enabled the model deformation to be performed at 60 frames/s, facilitating simulations of posture changes and physiological regressions at interactive speeds. The soft tissue response was accurate, with an R^2 value of >0.98 when compared to ground-truth global and local force deformation analysis. The deformation of the HN anatomy by the model agreed with the clinically observed deformations with an average correlation coefficient of 0.956. For a clinically relevant range of posture and physiological changes, the model deformations stabilized with an uncertainty of less than 0.01 mm. Conclusions: Documenting dose delivery for HN radiotherapy is essential, accounting for posture and physiological changes. The biomechanical model discussed in this paper was able to deform in real time, allowing interactive simulations and visualization of such changes. The model would allow patient-specific validations of the DIR method and has the potential to be a significant aid in adaptive radiotherapy techniques.
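
    One implicit Euler step for a linear mass-spring-damper system can be sketched as follows. The scalar parameters are illustrative, and the paper's two-substep muscle/soft-tissue split and GPU implementation are omitted:

```python
import numpy as np

def implicit_euler_step(M, C, K, x, v, f, dt):
    """One implicit Euler step for M x'' + C x' + K x = f; unconditional
    stability is what lets stiff soft-tissue springs take large steps."""
    A = M + dt * C + dt**2 * K
    v_new = np.linalg.solve(A, M @ v + dt * (f - K @ x))
    x_new = x + dt * v_new
    return x_new, v_new

# Stiff single oscillator (illustrative values): an explicit step this
# large would diverge, but the implicit step decays toward equilibrium.
M1, C1, K1 = np.eye(1), 0.5 * np.eye(1), 100.0 * np.eye(1)
x, v, f, dt = np.array([1.0]), np.array([0.0]), np.array([0.0]), 0.5
for _ in range(50):
    x, v = implicit_euler_step(M1, C1, K1, x, v, f, dt)
```

    The one linear solve per step is the price of unconditional stability; at the model's scale that solve is what the GPU parallelizes.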

  12. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
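
    The nested iteration performed by Iterator::Hash can be illustrated with a Python generator analogue (the original modules are Perl; the spec below is a hypothetical example, not their API):

```python
from itertools import product

def hash_iterator(spec):
    """Yield dicts covering every combination of the iterable values in
    `spec` -- a Python analogue of the nested iteration Iterator::Hash
    performs over embedded iterators."""
    keys = list(spec)
    for combo in product(*(spec[k] for k in keys)):
        yield dict(zip(keys, combo))

# Hypothetical spec: two keys, each mapped to an iterable of values.
combos = list(hash_iterator({"host": ["a", "b"], "port": [80, 443]}))
```

    Each yielded dict is one "permutation of the iterations of the hash", produced lazily just as the Perl iterators are.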

  13. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. This paper summarizes the parametric methods in common use and briefly introduces their principles. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular one is the Free-form Deformation method. Methods extended from two-dimensional parametrizations show promise for aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires choosing flexibly among them to suit the subsequent optimization procedure.
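
    As an example of one of the two-dimensional methods named above, the Class/Shape function transformation (CST) writes a surface as a class function times a Bernstein-polynomial shape function. The coefficients below are illustrative, not a fitted airfoil:

```python
import numpy as np
from math import comb

def cst_surface(x, A, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/Shape function Transformation: surface = class function
    x^n1 (1-x)^n2 times a Bernstein shape function, plus a trailing-edge
    thickness term. n1=0.5, n2=1 selects the round-nose,
    sharp-trailing-edge airfoil class."""
    x = np.asarray(x, float)
    C = x**n1 * (1.0 - x)**n2                          # class function
    n = len(A) - 1
    S = sum(a * comb(n, i) * x**i * (1.0 - x)**(n - i)
            for i, a in enumerate(A))                  # Bernstein shape function
    return C * S + x * dz_te

# Illustrative (hypothetical) shape coefficients for an upper surface
y = cst_surface(np.linspace(0.0, 1.0, 101), A=[0.17, 0.16, 0.14])
```

    The few Bernstein coefficients in A are the design variables, which is exactly the "large design space with few variables" property the survey emphasizes.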

  14. COMPASS Final Report: Low Cost Robotic Lunar Lander

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Oleson, Steven R.

    2010-01-01

    The COllaborative Modeling for the Parametric Assessment of Space Systems (COMPASS) team designed a robotic lunar lander to deliver an unspecified payload (greater than zero) to the lunar surface for the lowest cost in this 2006 design study. The purpose of the low-cost lunar lander design was to investigate how much payload an inexpensive chemical or Electric Propulsion (EP) system could deliver to the Moon's surface. The spacecraft designed as the baseline of this study was a solar-powered robotic lander, launched on a Minotaur V launch vehicle on a direct-injection trajectory to the lunar surface. A Star 27 solid rocket motor performs lunar capture and 88 percent of the descent burn. The robotic lunar lander soft-lands using a hydrazine propulsion system to perform the last 10 percent of the landing maneuver, ending the descent at a near-zero (but not exactly zero) terminal velocity. This low-cost robotic lander delivers 10 kg of science payload instruments to the lunar surface.

  15. A test of Hořava gravity: the dark energy

    NASA Astrophysics Data System (ADS)

    Park, Mu-In

    2010-01-01

    Recently Hořava proposed a renormalizable gravity theory with higher spatial derivatives in four dimensions which reduces to Einstein gravity with a non-vanishing cosmological constant in the IR but with improved UV behavior. Here, I consider a non-trivial test of the new gravity theory in the FRW universe by considering an IR modification which "softly" breaks the detailed balance condition of the original Hořava model. I separate the dark energy parts from the usual Einstein gravity parts in the Friedmann equations and obtain a formula for the equation-of-state parameter. The IR-modified Hořava gravity seems to be consistent with the current observational data, but more refined data sets are needed to see whether the theory is really consistent with our universe. From the consistency of the theory, I obtain constraints on the allowed values of w0 and wa in the Chevallier-Polarski-Linder parametrization, and these may be tested in the near future as the data sets sharpen.
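
    For reference, the Chevallier-Polarski-Linder form referred to here parametrizes the dark energy equation of state linearly in the scale factor a (equivalently in redshift z):

```latex
w(a) = w_0 + w_a\,(1 - a) \;=\; w_0 + w_a\,\frac{z}{1+z}, \qquad a = \frac{1}{1+z}
```

    so that w0 is today's value and w0 + wa its high-redshift limit, which is why observational constraints are quoted in the (w0, wa) plane.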

  16. The linear -- non-linear frontier for the Goldstone Higgs

    DOE PAGES

    Gavela, M. B.; Kanshin, K.; Machado, P. A. N.; ...

    2016-12-01

    The minimal $SO(5)/SO(4)$ sigma model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone boson ancestry. Varying the $\sigma$ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators.

  17. Analytical Model of the Nonlinear Dynamics of Cantilever Tip-Sample Surface Interactions for Various Acoustic-Atomic Force Microscopies

    NASA Technical Reports Server (NTRS)

    Cantrell, John H., Jr.; Cantrell, Sean A.

    2008-01-01

    A comprehensive analytical model of the interaction of the cantilever tip of the atomic force microscope (AFM) with the sample surface is developed that accounts for the nonlinearity of the tip-surface interaction force. The interaction is modeled as a nonlinear spring coupled at opposite ends to linear springs representing cantilever and sample surface oscillators. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a standard iteration procedure. Solutions are obtained for the phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) techniques including force modulation microscopy, atomic force acoustic microscopy, ultrasonic force microscopy, heterodyne force microscopy, resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM), and the commonly used intermittent contact mode (TappingMode) generally available on AFMs. The solutions are used to obtain a quantitative measure of image contrast resulting from variations in the Young modulus of the sample for the amplitude and phase images generated by the A-AFM techniques. Application of the model to RDF-AFUM and intermittent soft contact phase images of LaRC-cp2 polyimide polymer is discussed. The model predicts variations in the Young modulus of the material of 24 percent from the RDF-AFUM image and 18 percent from the intermittent soft contact image. Both predictions are in good agreement with the literature value of 21 percent obtained from independent, macroscopic measurements of sheet polymer material.

  18. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    NASA Astrophysics Data System (ADS)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  19. Classical and all-floating FETI methods for the simulation of arterial tissues

    PubMed Central

    Augustin, Christoph M.; Holzapfel, Gerhard A.; Steinbach, Olaf

    2015-01-01

    High-resolution and anatomically realistic computer models of biological soft tissues play a significant role in the understanding of the function of cardiovascular components in health and disease. However, the computational effort to handle fine grids to resolve the geometries as well as sophisticated tissue models is very challenging. One possibility to derive a strongly scalable parallel solution algorithm is to consider finite element tearing and interconnecting (FETI) methods. In this study we propose and investigate the application of FETI methods to simulate the elastic behavior of biological soft tissues. As one particular example we choose the artery, which, like most other biological tissues, is characterized by anisotropic and nonlinear material properties. We compare two specific approaches of FETI methods, classical and all-floating, and investigate the numerical behavior of different preconditioning techniques. In comparison to classical FETI, the all-floating approach has advantages not only concerning the implementation but in many cases also concerning the convergence of the global iterative solution method. This behavior is illustrated with numerical examples. We present results of linear elastic simulations to show convergence rates, as expected from the theory, and results from the more sophisticated nonlinear case where we apply a well-known anisotropic model to the realistic geometry of an artery. Although the FETI methods have great applicability to artery simulations, we also discuss some limitations concerning the dependence on material parameters. PMID:26751957

  20. Iterative current mode per pixel ADC for 3D SoftChip implementation in CMOS

    NASA Astrophysics Data System (ADS)

    Lachowicz, Stefan W.; Rassau, Alexander; Lee, Seung-Minh; Eshraghian, Kamran; Lee, Mike M.

    2003-04-01

    Mobile multimedia communication has rapidly become a significant area of research and development, constantly challenging boundaries on a variety of technological fronts. The processing requirements for the capture, conversion, compression, decompression, enhancement, display, etc. of increasingly higher quality multimedia content place heavy demands even on current ULSI (ultra large scale integration) systems, particularly for mobile applications where area and power are primary considerations. The ADC presented in this paper is designed for a vertically integrated (3D) system comprising two distinct layers bonded together using indium bump technology. The top layer is a CMOS imaging array containing analogue-to-digital converters and a buffer memory. The bottom layer takes the form of a configurable array processor (CAP), a highly parallel array of soft programmable processors capable of carrying out complex processing tasks directly on data stored in the top plane. This paper presents an ADC scheme for the image capture plane. The analogue photocurrent or sampled voltage is transferred to the ADC via a column or a column/row bus. In the proposed system, an array of analogue-to-digital converters is distributed so that a one-bit cell is associated with one sensor. The analogue-to-digital converters are algorithmic current-mode converters. Eight such cells are cascaded to form an 8-bit converter. Additionally, each photo-sensor is equipped with a current memory cell, and multiple conversions are performed with scaled values of the photocurrent for colour processing.
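
    The algorithmic (cyclic) conversion can be sketched behaviourally: each iteration doubles the residue and compares it against the reference, emitting one bit per stage. A minimal idealized sketch, not the transistor-level current-mode circuit of the paper:

```python
def algorithmic_adc(i_in, i_ref, bits=8):
    """Idealized algorithmic (cyclic) conversion for i_in in [0, i_ref):
    double the residue (e.g. with a current mirror), compare against the
    reference, and subtract the reference when the bit is set."""
    code, residue = 0, i_in
    for _ in range(bits):
        residue *= 2                     # residue doubling
        bit = 1 if residue >= i_ref else 0
        residue -= bit * i_ref           # subtract reference if bit set
        code = (code << 1) | bit
    return code

code = algorithmic_adc(0.37, 1.0)        # 8-bit code for 37% of full scale
```

    Cascading eight such one-bit stages, as the paper describes, pipelines this loop in hardware so each pixel's sample ripples through one stage per cycle.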

  1. Rephasing invariant parametrization of flavor mixing

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hun

    A new rephasing invariant parametrization for the 3 x 3 CKM matrix, called the (x, y) parametrization, is introduced, and its properties and applications are discussed. The overall phase condition leads this parametrization to have only six rephasing invariant parameters and two constraints. Its simplicity and regularity become apparent when it is applied to the one-loop RGE (renormalization group equations) for the Yukawa couplings. The implications of this parametrization for unification of the Yukawa couplings are also explored.

  2. Combinatorial screening of 3D biomaterial properties that promote myofibrogenesis for mesenchymal stromal cell-based heart valve tissue engineering.

    PubMed

    Usprech, Jenna; Romero, David A; Amon, Cristina H; Simmons, Craig A

    2017-08-01

    The physical and chemical properties of a biomaterial integrate with soluble cues in the cell microenvironment to direct cell fate and function. Predictable biomaterial-based control of integrated cell responses has been investigated with two-dimensional (2D) screening platforms, but integrated responses in 3D have largely not been explored systematically. To address this need, we developed a screening platform using polyethylene glycol norbornene (PEG-NB) as a model biomaterial with which the polymer wt% (to control elastic modulus) and adhesion peptide types (RGD, DGEA, YIGSR) and densities could be controlled independently and combinatorially in arrays of 3D hydrogels. We applied this platform and regression modeling to identify combinations of biomaterial and soluble biochemical (TGF-β1) factors that best promoted myofibrogenesis of human mesenchymal stromal cells (hMSCs) in order to inform our understanding of regenerative processes for heart valve tissue engineering. In contrast to 2D culture, our screens revealed that soft hydrogels (low PEG-NB wt%) best promoted spread myofibroblastic cells that expressed high levels of α-smooth muscle actin (α-SMA) and collagen type I. High concentrations of RGD enhanced α-SMA expression in the presence of TGF-β1 and cell spreading regardless of whether TGF-β1 was in the culture medium. Strikingly, combinations of peptides that maximized collagen expression depended on the presence or absence of TGF-β1, indicating that biomaterial properties can modulate MSC response to soluble signals. This combination of a 3D biomaterial array screening platform with statistical modeling is broadly applicable to systematically identify combinations of biomaterial and microenvironmental conditions that optimally guide cell responses. We present a novel screening platform and methodology to model and identify how combinations of biomaterial and microenvironmental conditions guide cell phenotypes in 3D. 
    Our approach to systematically identify complex relationships between microenvironmental cues and cell responses enables greater predictive power over cell fate in conditions with interacting material design factors. We demonstrate that this approach not only predicts that mesenchymal stromal cell (MSC) myofibrogenesis is promoted by soft, porous 3D biomaterials, but also generated new insights which demonstrate how biomaterial properties can differentially modulate MSC response to soluble signals. An additional benefit of the process is the use of both parametric and non-parametric analyses, which can reveal dominant significant trends as well as subtle interactions between biochemical and biomaterial cues. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  3. Comparison between multi-channel LDV and PWI for measurement of pulse wave velocity in distensible tubes: Towards a new diagnostic technique for detection of arteriosclerosis

    NASA Astrophysics Data System (ADS)

    Campo, Adriaan; Dudzik, Grzegorz; Apostolakis, Jason; Waz, Adam; Nauleau, Pierre; Abramski, Krzysztof; Dirckx, Joris; Konofagou, Elisa

    2017-10-01

    The aim of this work was to compare pulse wave velocity (PWV) measurements using laser Doppler vibrometry (LDV) and the more established ultrasound-based pulse wave imaging (PWI) in smooth vessels. Additionally, it was tested whether changes in phantom structure can be detected using LDV in vessels containing a local hardening of the vessel wall. Results from both methods showed good agreement, illustrated by the non-parametric Spearman correlation analysis (Spearman ρ = 1 and p < 0.05) and the Bland-Altman analysis (mean bias of -0.63 m/s and limits of agreement between -0.35 and -0.90 m/s). The PWV in soft phantoms as measured with LDV was 1.30±0.40 m/s and the PWV in stiff phantoms was 3.6±1.4 m/s. The PWV values in phantoms with inclusions were in between those of soft and stiff phantoms. However, using LDV, given the low number of measurement beams, the exact locations of inclusions could not be determined, and the PWV in the inclusions could not be measured. In conclusion, this study indicates that the PWV as measured with PWI is in good agreement with the PWV measured with LDV, although the latter technique has lower spatial resolution, fewer markers and larger distances between beams. In further studies, more LDV beams will be used to allow detection of local changes in arterial wall dynamics due to, e.g., small inclusions or local hardenings of the vessel wall.
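
    The Bland-Altman analysis used above computes the mean bias of paired measurements and its 95% limits of agreement. The paired readings below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired PWV readings (m/s) from the two instruments
ldv = [1.30, 1.45, 3.60, 3.40, 2.10]
pwi = [1.95, 2.00, 4.20, 4.05, 2.80]
bias, lo, hi = bland_altman(ldv, pwi)
```

    A non-zero bias with narrow limits, as reported in the abstract, indicates a systematic offset between methods but good agreement once that offset is accounted for.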

  4. Sci-Thur PM - Colourful Interactions: Highlights 04: A Fast Quantitative MRI Acquisition and Processing Pipeline for Radiation Treatment Planning and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jutras, Jean-David

    MRI-only Radiation Treatment Planning (RTP) is becoming increasingly popular because of a simplified work-flow and less inconvenience to the patient, who avoids multiple scans. The advantages of MRI-based RTP over traditional CT-based RTP lie in its superior soft-tissue contrast and absence of ionizing radiation dose. The lack of electron-density information in MRI can be addressed by automatic tissue classification. To distinguish bone from air, which both appear dark in MRI, an ultra-short echo time (UTE) pulse sequence may be used. Quantitative MRI parametric maps can provide improved tissue segmentation/classification and better sensitivity in monitoring disease progression and treatment outcome than standard weighted images. Superior tumor contrast can be achieved on pure T1 images compared to conventional T1-weighted images acquired in the same scan duration and voxel resolution. In this study, we have developed a robust and fast quantitative MRI acquisition and post-processing work-flow that integrates these latest advances into the MRI-based RTP of brain lesions. Using 3D multi-echo FLASH images at two different optimized flip angles (both acquired in under 9 min, at 1 mm isotropic resolution), parametric maps of T1, proton density (M0), and T2* are obtained with high contrast-to-noise ratio and negligible geometrical distortions, water-fat shifts and susceptibility effects. An additional 3D UTE MRI dataset is acquired (in under 4 min) and post-processed to classify tissues for dose simulation. The pipeline was tested on four healthy volunteers and a clinical trial on brain cancer patients is underway.
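
    T1 and M0 maps from two optimized flip angles are commonly computed with a DESPOT1-style linearization of the SPGR/FLASH signal equation; a sketch under that assumption (the study's exact fitting procedure may differ, and the flip angles, TR, and T1 below are illustrative):

```python
import numpy as np

def despot1(s1, s2, a1, a2, tr):
    """Two-flip-angle T1/M0 estimate from SPGR/FLASH signals.
    Linearization: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    with E1 = exp(-TR/T1). Angles in radians, TR in the T1 unit."""
    y = np.array([s1 / np.sin(a1), s2 / np.sin(a2)])
    x = np.array([s1 / np.tan(a1), s2 / np.tan(a2)])
    e1 = (y[1] - y[0]) / (x[1] - x[0])       # slope = exp(-TR/T1)
    m0 = (y[0] - e1 * x[0]) / (1.0 - e1)
    t1 = -tr / np.log(e1)
    return t1, m0

def spgr(m0, t1, tr, a):
    """Ideal SPGR signal equation, used here to synthesize test data."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

# Round trip on synthetic signals (T1 = 1000 ms, TR = 8 ms, M0 = 1)
a1, a2, tr = np.deg2rad(4.0), np.deg2rad(18.0), 8.0
t1, m0 = despot1(spgr(1.0, 1000.0, tr, a1), spgr(1.0, 1000.0, tr, a2),
                 a1, a2, tr)
```

    Applied voxel-wise to the two flip-angle volumes, this yields the T1 and M0 parametric maps; T2* then follows from the multi-echo decay.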

  5. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that usually would have high risk of damage. This study is based on combined working volumes of optical coherence tomography (OCT) and Er:YAG cutting laser. High level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems a cross-like incision is ablated with the Er:YAG laser and segmented with OCT in three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT controlled laser bone ablation with minimal process time.

  6. Performance analysis of model based iterative reconstruction with dictionary learning in transportation security CT

    NASA Astrophysics Data System (ADS)

    Haneda, Eri; Luo, Jiajia; Can, Ali; Ramani, Sathish; Fu, Lin; De Man, Bruno

    2016-05-01

    In this study, we implement and compare model based iterative reconstruction (MBIR) with dictionary learning (DL) over MBIR with pairwise pixel-difference regularization, in the context of transportation security. DL is a technique of sparse signal representation using an overcomplete dictionary, which has provided promising results in image processing applications including denoising [1], as well as medical CT reconstruction [2]. It has been previously reported that DL produces promising results in terms of noise reduction and preservation of structural details, especially for low dose and few-view CT acquisitions [2]. A distinguishing feature of transportation security CT is that scanned baggage may contain items with a wide range of material densities. While medical CT typically scans soft tissues, blood with and without contrast agents, and bones, luggage typically contains more high density materials (e.g., metals and glass), which can produce severe distortions such as metal streaking artifacts. Important factors of security CT are the emphasis on image quality such as resolution, contrast, noise level, and CT number accuracy for target detection. While MBIR has shown exemplary performance in the trade-off of noise reduction and resolution preservation, we demonstrate that DL may further improve this trade-off. In this study, we used the KSVD-based DL [3] combined with the MBIR cost-minimization framework and compared results to Filtered Back Projection (FBP) and MBIR with pairwise pixel-difference regularization. We performed a parameter analysis to show the image quality impact of each parameter. We also investigated few-view CT acquisitions where DL can show an additional advantage relative to pairwise pixel-difference regularization.
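
    The sparse-coding half of a KSVD-style regularizer can be sketched with orthogonal matching pursuit; the dictionary-update step and the MBIR cost function are omitted, and the random dictionary below is illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over the
    unit-norm columns (atoms) of dictionary D."""
    residual, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        support.append(j)
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef                    # re-fit, update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x_true = np.zeros(32); x_true[[3, 11]] = [1.0, -0.5]
x_hat = omp(D, D @ x_true, k=2)
```

    In KSVD-regularized MBIR, each image patch is sparse-coded this way against the learned dictionary inside every outer cost-minimization iteration.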

  7. Demonstration of optical parametric gain generation in the 1 μm regime based on a photonic crystal fiber pumped by a picosecond mode-locked ytterbium-doped fiber laser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Si-Gang; Wang, Xiao-Jian; Gou, Dou-Dou; Chen, Hong-Wei; Chen, Ming-Hua; Xie, Shi-Zhong

    2014-01-01

    We report the experimental demonstration of the optical parametric gain generation in the 1 μm regime based on a photonic crystal fiber (PCF) with a zero group velocity dispersion (GVD) wavelength of 1062 nm pumped by a homemade tunable picosecond mode-locked ytterbium-doped fiber laser. A broad parametric gain band is obtained by pumping the PCF in the anomalous GVD regime with a relatively low power. Two separated narrow parametric gain bands are observed by pumping the PCF in the normal GVD regime. The peak of the parametric gain profile can be tuned from 927 to 1038 nm and from 1099 to 1228 nm. This widely tunable parametric gain band can be used for a broad band optical parametric amplifier, large span wavelength conversion or a tunable optical parametric oscillator.
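
    The qualitative behavior reported above (broad gain for anomalous-dispersion pumping, narrow detuned sidebands for normal-dispersion pumping) follows from the textbook small-signal gain of degenerate-pump four-wave mixing (see, e.g., Agrawal, "Nonlinear Fiber Optics"). The sketch below evaluates that standard formula; all parameter values are illustrative, not the paper's measured fiber parameters.

```python
import numpy as np

gamma = 10e-3    # nonlinear coefficient [1/(W m)] (illustrative)
P = 50.0         # pump peak power [W]
beta2 = -5e-27   # GVD at the pump [s^2/m] (negative: anomalous regime)
L = 10.0         # interaction length [m]

def signal_gain(omega):
    """Small-signal parametric power gain at angular detuning omega."""
    kappa = beta2 * omega**2 + 2 * gamma * P   # total phase mismatch
    g = np.sqrt(complex((gamma * P)**2 - (kappa / 2)**2))
    # complex arithmetic handles both the exponential-gain (g real)
    # and oscillatory (g imaginary) regimes in one expression
    return abs(1 + (gamma * P * np.sinh(g * L) / g)**2)

# perfect phase matching (kappa = 0) occurs at Omega^2 = 2*gamma*P/|beta2|
omega_pm = np.sqrt(2 * gamma * P / abs(beta2))
peak_gain = signal_gain(omega_pm)   # equals 1 + sinh(gamma*P*L)^2
```

    Sweeping `signal_gain` over detuning reproduces the broad band around the phase-matched frequency; with `beta2 > 0` the phase-matching condition instead picks out two narrow, well-separated sidebands.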

  8. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focus of this paper is to study the admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of the iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
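
    For context, classical (global) value iteration, of which the paper's algorithm is a local-update variant, can be sketched on a toy finite problem. The transition and utility data below are random placeholders; the paper's local variant would update `V` only on a subset of states per iteration.

```python
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(1)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)       # P[a, s, s']: row-stochastic
U = rng.random((n_states, n_actions))   # stage utility U[s, a]

V = np.zeros(n_states)
for it in range(1000):
    # one-step lookahead over the WHOLE state space (global update)
    Q = U + gamma * np.einsum('ask,k->sa', P, V)
    V_new = Q.max(axis=1)
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-10:                   # simple termination criterion
        break
policy = Q.argmax(axis=1)               # greedy (approximately optimal) law
```

    The termination analysis in the paper addresses precisely when such a stopping rule yields an admissible (stabilizing) control law rather than merely a small value-function change.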

  9. Robotics in scansorial environments

    NASA Astrophysics Data System (ADS)

    Autumn, Kellar; Buehler, Martin; Cutkosky, Mark; Fearing, Ronald; Full, Robert J.; Goldman, Daniel; Groff, Richard; Provancher, William; Rizzi, Alfred A.; Saranli, Uluc; Saunders, Aaron; Koditschek, Daniel E.

    2005-05-01

    We review a large multidisciplinary effort to develop a family of autonomous robots capable of rapid, agile maneuvers in and around natural and artificial vertical terrains such as walls, cliffs, caves, trees and rubble. Our robot designs are inspired by (but not direct copies of) biological climbers such as cockroaches, geckos, and squirrels. We are incorporating advanced materials (e.g., synthetic gecko hairs) into these designs and fabricating them using state of the art rapid prototyping techniques (e.g., shape deposition manufacturing) that permit multiple iterations of design and testing with an effective integration path for the novel materials and components. We are developing novel motion control techniques to support dexterous climbing behaviors that are inspired by neuroethological studies of animals and descended from earlier frameworks that have proven analytically tractable and empirically sound. Our near term behavioral targets call for vertical climbing on soft (e.g., bark) or rough surfaces and for ascents on smooth, hard steep inclines (e.g., 60 degree slopes on metal or glass sheets) at one body length per second.

  10. An algorithm of improving speech emotional perception for hearing aid

    NASA Astrophysics Data System (ADS)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian radial basis function (RBF) kernel, the parameter determining the nonlinear mapping is optimized on the basis of kernel target alignment. The resulting kernel function is then used as the basis kernel of multiple kernel learning (MKL) with a slack variable that can mitigate the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL is proposed to balance the margin against the error. Moreover, an iterative algorithm is used to solve for the combination coefficients and the hyper-plane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five kinds of emotions: happiness, sadness, anger, fear, and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
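
    Kernel target alignment, the criterion used above to tune the RBF parameter, has a simple closed form: the normalized Frobenius inner product between the kernel matrix and the ideal label kernel yy^T. A minimal sketch on synthetic two-class data (illustrative stand-ins for the paper's speech-emotion features):

```python
import numpy as np

def rbf_kernel(X, sigma):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# two synthetic Gaussian classes in place of real SER features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 3)),
               rng.normal(2.0, 0.5, (20, 3))])
y = np.array([-1] * 20 + [1] * 20)

# tune the RBF width by maximizing alignment over a small grid
sigmas = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
best_sigma = max(sigmas, key=lambda s: alignment(rbf_kernel(X, s), y))
```

    The aligned kernel would then serve as the basis kernel of the soft-margin MKL combination.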

  11. [Numerical finite element modeling of custom car seat using computer aided design].

    PubMed

    Huang, Xuqi; Singare, Sekou

    2014-02-01

    A good cushion can not only provide the sitter with high comfort, but also control the distribution of hip pressure to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion geometries were acquired from a laser scanner, and CAD software was used to create the solid model. The FE model of a seated individual was developed using ANSYS software (ANSYS Inc., Canonsburg, PA). The model is divided into two parts, i.e., the cushion model made of foam and the buttock model represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper body, and then solving the system iteratively.

  12. User engineering: A new look at system engineering

    NASA Technical Reports Server (NTRS)

    Mclaughlin, Larry L.

    1987-01-01

    User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human computer interface design which is presented and reviewed as an interactive prototype by users. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.

  13. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph, instead of using the conventional neural network approaches. Within this factor graph framework, soft information, describing the probability that each satellite will broadcast to a terminal in a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains the optimal solution but also enjoys a low complexity suitable for integrated-circuit implementation.

  14. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, combining the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with first order total variation and the edge blurring effect associated with quadratic H1 regularization or second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations but instead exploits the efficient split Bregman algorithm, which benefits from the fast Fourier transform, the analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
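
    The soft thresholding equation mentioned above is the closed-form shrinkage step inside split Bregman iterations. The sketch below shows the pattern on a first-order 1-D TV denoising analogue; the MSTGV model itself adds second-order TGV terms and uses FFT-based subproblem solves, so this is illustrative only.

```python
import numpy as np

def soft_threshold(v, t):
    """Soft shrinkage: the closed-form proximal update for an L1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_1d(f, lam=0.5, mu=5.0, n_iter=100):
    """Split Bregman for min_u 0.5||u-f||^2 + lam*||Du||_1 (D = forward diff)."""
    n = len(f)
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = np.eye(n) + mu * D.T @ D            # normal equations for the u-step
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic u-step
        d = soft_threshold(D @ u + b, lam / mu)         # shrinkage d-step
        b = b + D @ u - d                               # Bregman update
    return u

# noisy step signal: TV denoising should preserve the jump, remove noise
f = (np.concatenate([np.zeros(50), np.ones(50)])
     + 0.1 * np.random.default_rng(0).normal(size=100))
u = tv_denoise_1d(f)
```

    In 2-D, the u-step becomes a screened Poisson solve (hence the FFT), while the shrinkage and Bregman updates keep the same form.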

  15. On optimal soft-decision demodulation

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1975-01-01

    Wozencraft and Kennedy have suggested that the appropriate demodulator criterion of goodness is the cut-off rate of the discrete memoryless channel created by the modulation system; the criterion of goodness adopted in this note is the symmetric cut-off rate, which differs from the former only in that the signals are assumed equally likely. Massey's necessary condition for optimal demodulation of binary signals is generalized to M-ary signals. It is shown that the optimal demodulator decision regions in likelihood space are bounded by hyperplanes. An iterative method is formulated for finding these optimal decision regions from an initial good guess. For additive white Gaussian noise, the corresponding optimal decision regions in signal space are bounded by hypersurfaces with hyperplane asymptotes; these asymptotes themselves bound the decision regions of a demodulator which, in several examples, is shown to be virtually optimal. In many cases, the necessary condition for demodulator optimality is also sufficient, but a counterexample to its general sufficiency is given.
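
    The symmetric cut-off rate of a discrete memoryless channel with equally likely inputs is R0 = -log2 Σ_j ((1/M) Σ_i √P(j|i))². A minimal sketch evaluating it for a quantized BPSK/AWGN channel, showing why soft decisions outperform hard decisions; the noise level and quantizer thresholds are illustrative choices, not from the note.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def channel_matrix(thresholds, sigma, inputs=(-1.0, 1.0)):
    """P[i, j]: probability that input i plus Gaussian noise lands in bin j."""
    edges = [-np.inf] + list(thresholds) + [np.inf]
    P = np.zeros((len(inputs), len(edges) - 1))
    for i, s in enumerate(inputs):
        for j in range(len(edges) - 1):
            P[i, j] = Phi((edges[j + 1] - s) / sigma) - Phi((edges[j] - s) / sigma)
    return P

def symmetric_cutoff_rate(P):
    """R0 = -log2 sum_j ((1/M) sum_i sqrt(P(j|i)))^2, equally likely inputs."""
    s = np.sqrt(P).mean(axis=0)
    return -np.log2(np.sum(s ** 2))

sigma = 1.0
hard = symmetric_cutoff_rate(channel_matrix([0.0], sigma))              # 1-bit
soft = symmetric_cutoff_rate(channel_matrix(np.linspace(-1.5, 1.5, 7),  # 3-bit
                                            sigma))
```

    Refining the output quantization can only increase R0, which is exactly the sense in which the cut-off rate ranks demodulator designs.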

  16. Hyperbolic and semi-parametric models in finance

    NASA Astrophysics Data System (ADS)

    Bingham, N. H.; Kiesel, Rüdiger

    2001-02-01

    The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.

  17. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.

  18. Understanding and Predicting Profile Structure and Parametric Scaling of Intrinsic Rotation

    NASA Astrophysics Data System (ADS)

    Wang, Weixing

    2016-10-01

    It is shown for the first time that turbulence-driven residual Reynolds stress can account for both the shape and magnitude of the observed intrinsic toroidal rotation profile. Nonlinear, global gyrokinetic simulations using GTS of DIII-D ECH plasmas indicate a substantial ITG fluctuation-induced non-diffusive momentum flux generated around a mid-radius-peaked intrinsic toroidal rotation profile. The non-diffusive momentum flux is dominated by the residual stress, with a negligible contribution from the momentum pinch. The residual stress profile shows a robust anti-gradient, dipole structure in a set of ECH discharges with varying ECH power. Such interesting features of non-diffusive momentum fluxes, in connection with edge momentum sources and sinks, are found to be critical to driving the non-monotonic core rotation profiles in the experiments. Both the turbulence intensity gradient and zonal-flow E×B shear are identified as major contributors to the generation of the k∥-asymmetry needed for residual stress generation. By balancing the residual stress and the momentum diffusion, a self-organized, steady-state rotation profile is calculated. The predicted core rotation profiles agree well with the experimentally measured main-ion toroidal rotation. The validated model is further used to investigate the characteristic dependence of global rotation profile structure in the multi-dimensional parametric space covering turbulence type, q-profile structure, and collisionality, with the goal of developing the physics understanding needed for rotation profile control and optimization. Interesting results obtained include intrinsic rotation reversal induced by the ITG-TEM transition in the flat-q-profile regime and by a change in the q-profile from weak to normal shear. Fluctuation-generated poloidal Reynolds stress is also shown to significantly modify the neoclassical poloidal rotation in a way consistent with experimental observations. Finally, the first-principles-based model is applied to studying the ρ*-scaling and predicting rotations in the ITER regime. Work supported by U.S. DOE Contract DE-AC02-09-CH11466.

  19. Parametric Cost Deployment

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1995-01-01

    Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.

  20. Investigation of the photon statistics of parametric fluorescence in a traveling-wave parametric amplifier by means of self-homodyne tomography.

    PubMed

    Vasilyev, M; Choi, S K; Kumar, P; D'Ariano, G M

    1998-09-01

    Photon-number distributions for parametric fluorescence from a nondegenerate optical parametric amplifier are measured with a novel self-homodyne technique. These distributions exhibit the thermal-state character predicted by theory. However, a difference between the fluorescence gain and the signal gain of the parametric amplifier is observed. We attribute this difference to a change in the signal-beam profile during the traveling-wave pulsed amplification process.

  1. A Cartesian parametrization for the numerical analysis of material instability

    DOE PAGES

    Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; ...

    2016-02-25

    We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie in a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
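
    For an isotropic material the test described above can be sketched directly: sample vectors on the six faces of the cube [-1,1]³ in lieu of unit normals, and check where the determinant of the acoustic tensor changes sign. The elastic moduli below are illustrative, not from the paper; for this isotropic case det Q(m) = μ²(λ+2μ)|m|⁶, so ellipticity is lost exactly when λ+2μ ≤ 0.

```python
import numpy as np

def isotropic_C(lam, mu):
    """Isotropic elasticity tensor C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)."""
    I = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', I, I)
            + mu * (np.einsum('ik,jl->ijkl', I, I)
                    + np.einsum('il,jk->ijkl', I, I)))

def acoustic_tensor(C, m):
    """Q_ik = C_ijkl m_j m_l for direction vector m (not necessarily unit)."""
    return np.einsum('ijkl,j,l->ik', C, m, m)

def cube_face_vectors(n=11):
    """Cartesian parametrization: grid the 6 faces of the cube [-1,1]^3."""
    t = np.linspace(-1.0, 1.0, n)
    u, v = np.meshgrid(t, t)
    u, v = u.ravel(), v.ravel()
    ones = np.ones_like(u)
    faces = []
    for sign in (1.0, -1.0):
        faces += [np.stack([sign * ones, u, v], axis=1),
                  np.stack([u, sign * ones, v], axis=1),
                  np.stack([u, v, sign * ones], axis=1)]
    return np.vstack(faces)

def min_det(lam, mu):
    C = isotropic_C(lam, mu)
    return min(np.linalg.det(acoustic_tensor(C, m)) for m in cube_face_vectors())

stable = min_det(lam=1.0, mu=1.0)      # elliptic: det Q > 0 on every face
unstable = min_det(lam=-2.5, mu=1.0)   # lam + 2*mu < 0: singularity detected
```

    Because the cube vectors never vanish, the sign of det Q is the same as it would be for the corresponding unit normals, which is why the Cartesian parametrization can replace spherical coordinates in the singularity test.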

  2. A Cartesian parametrization for the numerical analysis of material instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.

    We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie in a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.

  3. Low Impact Docking System (LIDS)

    NASA Technical Reports Server (NTRS)

    LaBauve, Tobie E.

    2009-01-01

    Since 1996, NASA has been developing a docking system that will simplify operations and reduce risks associated with mating spacecraft. This effort has focused on developing and testing an original, reconfigurable, active, closed-loop, force-feedback controlled docking system using modern technologies. The primary objective of this effort has been to design a docking interface that is tunable to the unique performance requirements for all types of mating operations (i.e., docking and berthing, autonomous and piloted rendezvous, and in-space assembly of vehicles, modules and structures). The docking system must also support the transfer of crew, cargo, power, fluid, and data. As a result of the past 10 years of docking system advancement, the Low Impact Docking System or LIDS was developed. The current LIDS design incorporates the lessons learned and development experiences from both previous and existing docking systems. LIDS feasibility was established through multiple iterations of prototype hardware development and testing. Benefits of LIDS include safe, low impact mating operations, more effective and flexible mission implementation with an anytime/anywhere mating capability, system level redundancy, and a more affordable and sustainable mission architecture with reduced mission and life cycle costs. In 1996 the LIDS project, then known as the Advanced Docking Berthing System (ADBS) project, launched a four year developmental period. At the end of the four years, the team had built a prototype of the soft-capture hardware and verified the control system that will be used to control the soft-capture system. In 2001, the LIDS team was tasked to work with the X-38 Crew Return Vehicle (CRV) project and build its first Engineering Development Unit (EDU).

  4. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    PubMed

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  6. Quantifying sampling noise and parametric uncertainty in atomistic-to-continuum simulations using surrogate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.

    2015-08-11

    We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.

  7. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
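
    The Expected Improvement criterion at the core of EGO has a closed form when the surrogate prediction at a candidate point is Gaussian. A minimal sketch for minimization; the candidate means, standard deviations, and incumbent value below are hypothetical, not SAGE III data.

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - F, 0)] with F ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)   # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# hypothetical surrogate predictions (mean, std) at three candidate orbits
candidates = [(0.8, 0.05), (1.2, 0.6), (2.0, 1.5)]
f_best = 1.0   # best objective value observed so far
next_idx = max(range(len(candidates)),
               key=lambda i: expected_improvement(*candidates[i], f_best))
```

    Note how EI trades exploitation (low mean) against exploration (high uncertainty); the multi-start modification in the paper finds several local maxima of this surface per iteration so the corresponding runs can execute in parallel.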

  8. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second order elliptic differential operator on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion whose infinitesimal generator is this elliptic operator. Both P_h and the generator are self-adjoint and (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the corresponding random processes are what we call MB diffusion and (generalized) Legendre diffusion, respectively. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  9. The nonlinear Galerkin method: A multi-scale method applied to the simulation of homogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Debussche, A.; Dubois, T.; Temam, R.

    1993-01-01

    Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and of the nonlinear interaction terms are derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible in comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently is proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm are derived, making it a completely self-adaptive procedure. Finally, realistic simulations of Kolmogorov-like flows over several eddy-turnover times are performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is conducted.

  10. Study of Solid Particle Behavior in High Temperature Gas Flows

    NASA Astrophysics Data System (ADS)

    Majid, A.; Bauder, U.; Stindl, T.; Fertig, M.; Herdrich, G.; Röser, H.-P.

    2009-01-01

    The Euler-Lagrangian approach is used for the simulation of solid particles in hypersonic entry flows. For the flow field simulation, the program SINA (Sequential Iterative Non-equilibrium Algorithm), developed at the Institut für Raumfahrtsysteme, is used. The model for the effect of the carrier gas on a particle includes drag force and particle heating only; other effects, such as the Magnus lift force or damping torque, are not yet taken into account. The reverse effect of the particle phase on the gaseous phase is currently neglected. A parametric analysis is performed regarding the impact of variations in the physical input conditions, i.e., the position, velocity, size and material of the particle. Convective heat fluxes onto the surface of the particle and its radiative cooling are discussed. The variation of particle temperature under different conditions is presented, and the influence of the various input conditions on the trajectory is explained. A semi-empirical model for the particle-wall interaction is also discussed, and the influence of the wall on the particle trajectory under different particle conditions is presented. The heat fluxes onto the wall due to particle impingement are also computed and compared with the heat fluxes from the gas.

  11. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-01

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, yielding a sparse, high-dimensional PC surrogate from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
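
    Once an orthonormal PC surrogate is available, first-order Sobol sensitivities follow directly from the squared expansion coefficients, which is why the surrogate makes GSA essentially free. The sketch below is generic PC bookkeeping, not the WIBCS algorithm itself; the multi-index layout is an assumption.

```python
import numpy as np

def pc_sobol_first_order(coeffs, multi_index):
    """First-order Sobol indices from an orthonormal PC surrogate.

    coeffs[j] multiplies the basis term with multi_index[j] (tuple of
    per-input polynomial degrees). The output variance decomposes over
    squared coefficients; input i's first-order index collects the terms
    involving input i alone.
    """
    coeffs = np.asarray(coeffs, float)
    mi = np.asarray(multi_index, int)
    var_total = np.sum(coeffs[mi.sum(axis=1) > 0] ** 2)  # exclude the mean
    n_in = mi.shape[1]
    S = np.zeros(n_in)
    for i in range(n_in):
        only_i = (mi[:, i] > 0) & (np.delete(mi, i, axis=1).sum(axis=1) == 0)
        S[i] = np.sum(coeffs[only_i] ** 2) / var_total
    return S
```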

  12. A Quasi-Dynamic Approach to modelling Hydrodynamic Focusing

    NASA Astrophysics Data System (ADS)

    Kommajosula, Aditya; Xu, Songzhe; Wu, Chueh-Yu; di Carlo, Dino; Ganapathysubramanian, Baskar; ComPM Lab Team; Di Carlo Lab Collaboration

    2016-11-01

    We examine a particle's tendency at different spatial locations to shift/rotate towards the equilibrium location, by constrained simulation. Although past studies have used this procedure in conjunction with FSI methods to great effect, the current work in 2D explores an alternative approach by utilizing a modified trust-region-based root-finding algorithm to solve for particle position and velocities at equilibrium, using "snapshots" of finite-element solutions to the steady-state Navier-Stokes equations iteratively over a computational domain attached to the particle reference frame. Through an assortment of test cases comprising circular and non-circular particle geometries, an incorporation of stability theory as applicable to dynamical systems is demonstrated to locate the final focusing location and velocities. The results are compared with previous experimental/numerical reports and found to be in close agreement. For an illustrative case, a thousand-fold reduction in computational time is observed for the current workflow relative to its transient counterpart. The current framework is formulated in 2D for 3 degrees of freedom and will be extended to 3D. This framework potentially allows for quick, high-throughput parametric-space studies of equilibrium scaling laws.
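
    A plain Newton root finder with finite-difference Jacobians can stand in for the modified trust-region algorithm: each residual evaluation corresponds to one steady-state flow "snapshot", and the equilibrium is the zero of the net lateral force/torque. The sketch below and its toy residual are assumptions for illustration only.

```python
import numpy as np

def find_equilibrium(residual, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Newton iteration on residual(x) = 0, where residual(x) would be
    the net force/torque on the particle from a steady-state flow solve
    at state x (stand-in for the paper's trust-region method)."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        if np.linalg.norm(r) < tol:
            break
        # finite-difference Jacobian: each column costs one extra "snapshot"
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.atleast_1d(residual(xp)) - r) / h
        x = x - np.linalg.solve(J, r)
    return x
```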

  13. Detecting subject-specific activations using fuzzy clustering

    PubMed Central

    Seghier, Mohamed L.; Friston, Karl J.; Price, Cathy J.

    2007-01-01

    Inter-subject variability in evoked brain responses is attracting attention because it may reflect important variability in structure–function relationships over subjects. This variability could be a signature of degenerate (many-to-one) structure–function mappings in normal subjects or reflect changes that are disclosed by brain damage. In this paper, we describe a non-iterative fuzzy clustering algorithm (FCP: fuzzy clustering with fixed prototypes) for characterizing inter-subject variability in between-subject or second-level analyses of fMRI data. The approach identifies the contribution of each subject to response profiles in voxels surviving a classical F-statistic criterion. The output identifies subjects who drive activation in specific cortical regions (local effects) or in voxels distributed across neural systems (global effects). The sensitivity of the approach was assessed in 38 normal subjects performing an overt naming task. FCP revealed that several subjects had either abnormally high or abnormally low responses. FCP may be particularly useful for characterizing outlier responses in rare patients or heterogeneous populations. In these cases, atypical activations may not be detected by standard tests, under parametric assumptions. The advantage of using FCP is that it searches all voxels systematically and can identify atypical activation patterns in a quantitative and unsupervised manner. PMID:17478103
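
    The non-iterative character of FCP comes from holding the cluster prototypes fixed, so the standard fuzzy c-means membership update is applied exactly once. The sketch below shows that single pass; the function name and fuzziness default are illustrative, not from the paper.

```python
import numpy as np

def fcp_memberships(X, prototypes, m=2.0, eps=1e-12):
    """Membership grades for fuzzy clustering with *fixed* prototypes.

    X: (n_samples, n_features); prototypes: (n_clusters, n_features).
    Because the prototypes are not re-estimated, one application of the
    fuzzy c-means membership formula suffices (non-iterative).
    """
    X = np.atleast_2d(X)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2) + eps
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)
```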

  14. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    PubMed

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model is well suited to cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is employed to approximate the Pareto frontier. Results show that, for the circular-arc leaf end, the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while using a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, the optimized leaf ends are in close agreement with actual shapes. Taken together, the proposed method can provide insight into the leaf end shape design of multileaf collimators.
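
    The parametric leaf-end profiles that the genetic algorithm would optimize can be evaluated with de Casteljau's algorithm; a minimal sketch for the Bézier representation (the control-point layout is a hypothetical example, not the paper's optimized shape):

```python
def bezier_point(ctrl, t):
    """Evaluate a Bézier curve at t in [0, 1] by de Casteljau's algorithm.

    `ctrl` is a list of (x, y) control points describing, e.g., a leaf-end
    profile; repeated linear interpolation collapses them to one point.
    """
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```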

  15. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
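
    The multi-stage fallback logic can be sketched as follows; `classifiers` and the `accept` test (standing in for the expert-based accuracy check) are hypothetical callables, not the paper's implementation.

```python
def hierarchical_classify(samples, classifiers, accept):
    """Multi-stage adaptive classification sketch: keep a sample's label
    only where the stage-specific `accept` test passes; remaining samples
    fall through to the next classifier in the sequence."""
    labels = {}
    pending = list(samples)
    for stage, clf in enumerate(classifiers):
        still = []
        for s in pending:
            y = clf(s)
            if accept(stage, s, y):
                labels[s] = y
            else:
                still.append(s)
        pending = still
        if not pending:
            break
    # any unresolved samples keep the last stage's label
    for s in pending:
        labels[s] = classifiers[-1](s)
    return labels
```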

  16. Stratospheric Airship Design Sensitivity

    NASA Astrophysics Data System (ADS)

    Smith, Ira Steve; Fortenberry, Michael; Noll, James; Perry, William

    2012-07-01

    The concept of a stratospheric or high altitude powered platform has been around almost as long as stratospheric free balloons. Airships are defined as Lighter-Than-Air (LTA) vehicles with propulsion and steering systems. Over the past five years there has been an increased interest by the U.S. Department of Defense (DoD) as well as commercial enterprises in airships at all altitudes. One of these interests is in the area of stratospheric airships. Whereas the DoD is primarily interested in downward-looking payloads, such vehicles also offer a platform for science applications, both downward and outward looking. Designing airships to operate in the stratosphere is very challenging due to the extreme high altitude environment; it differs significantly from low altitude airship designs such as the familiar advertising or tourism airships and blimps. The stratospheric airship design is very dependent on the specific application and the particular requirements levied on the vehicle, with mass and power limits. The design is a complex iterative process and is sensitive to many factors. In an effort to identify the key factors that have the greatest impact on the design, a parametric analysis of a simplified airship design has been performed. The results of these studies will be presented.

  17. Semiparametric Time-to-Event Modeling in the Presence of a Latent Progression Event

    PubMed Central

    Rice, John D.; Tsodikov, Alex

    2017-01-01

    Summary In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood–based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. PMID:27556886
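
    The closed-form building block that makes the EM iterations efficient is the Breslow estimator of the cumulative baseline hazard. A minimal sketch for a Cox-type model with known relative-risk scores (the function name and tie handling by Breslow's convention are illustrative assumptions):

```python
import numpy as np

def breslow_cumhazard(times, events, risk_scores):
    """Breslow estimator of the cumulative baseline hazard.

    At each event time t_i the hazard increment is
    d_i / sum_{j in risk set} exp(beta' x_j), with `risk_scores` holding
    the exp(beta' x_j) terms; the risk set contains subjects still under
    observation at t_i.
    """
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    d = np.asarray(events, int)[order]
    w = np.asarray(risk_scores, float)[order]
    # denom[i] = sum of risk scores over subjects with time >= t[i]
    denom = np.cumsum(w[::-1])[::-1]
    uniq = np.unique(t[d == 1])
    H = np.array([np.sum(d[t == u]) / denom[np.searchsorted(t, u)]
                  for u in uniq]).cumsum()
    return uniq, H
```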

  19. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  20. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
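
    As a toy illustration of the value iteration recursion (on a finite MDP with costs rather than the paper's continuous nonlinear systems, and without the neural-network approximation):

```python
import numpy as np

def value_iteration(P, R, V0=None, tol=1e-9, max_iter=10_000):
    """Classical value iteration for cost minimization:
    V_{k+1}(s) = min_a [ R(s, a) + sum_{s'} P(s'|s, a) V_k(s') ].

    P has shape (A, S, S); R has shape (S, A). An arbitrary start V0 is
    allowed, echoing the arbitrary positive semi-definite initialization
    above; convergence in the undiscounted case needs a proper/absorbing
    structure in the MDP.
    """
    A, S, _ = P.shape
    V = np.zeros(S) if V0 is None else np.asarray(V0, float)
    for _ in range(max_iter):
        Q = R + np.tensordot(P, V, axes=1).T   # Q[s, a]
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```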

  1. Metal artifact correction for x-ray computed tomography using kV and selective MV imaging.

    PubMed

    Wu, Meng; Keil, Andreas; Constantin, Dragos; Star-Lack, Josh; Zhu, Lei; Fahrig, Rebecca

    2014-12-01

    The overall goal of this work is to improve the computed tomography (CT) image quality for patients with metal implants or fillings by completing the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. When both of these imaging systems, which are available on current radiotherapy devices, are used, metal streak artifacts are avoided, and the soft-tissue contrast is restored, even for regions in which the kV data cannot contribute any information. Three image-reconstruction methods, including two filtered back-projection (FBP)-based analytic methods and one iterative method, for combining kV and MV projection data from the two on-board imaging systems of a radiotherapy device are presented in this work. The analytic reconstruction methods modify the MV data based on the information in the projection or image domains and then patch the data onto the kV projections for a FBP reconstruction. In the iterative reconstruction, the authors used dual-energy (DE) penalized weighted least-squares (PWLS) methods to simultaneously combine the kV/MV data and perform the reconstruction. The authors compared kV/MV reconstructions to kV-only reconstructions using a dental phantom with fillings and a hip-implant numerical phantom. Simulation results indicated that dual-energy sinogram patch FBP and the modified dual-energy PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in the kV projections. The root-mean-square errors of soft-tissue patterns obtained using combined kV/MV data are 10-15 Hounsfield units smaller than those of the kV-only images, and the structural similarity index measure also indicates a 5%-10% improvement in the image quality. The added dose from the MV scan is much less than the dose from the kV scan if a high efficiency MV detector is assumed. 
The authors have shown that it is possible to improve the image quality of kV CTs for patients with metal implants or fillings by completing the missing kV projection data with selectively acquired MV data that do not suffer from photon starvation. Numerical simulations demonstrated that dual-energy sinogram patch FBP and a modified kV/MV PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in kV projections. Combined kV/MV images may permit the improved delineation of structures of interest in CT images for patients with metal implants or fillings.
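
    The sinogram-patching idea can be sketched as follows: rays flagged as photon-starved by a metal trace mask are replaced by rescaled MV values so a standard FBP can run on the completed kV sinogram. The first-order linear kV/MV mapping fitted over the non-metal rays is a simplifying assumption; the paper's projection/image-domain modifications and the DE-PWLS method are more elaborate.

```python
import numpy as np

def patch_sinogram(kv, mv, metal_mask):
    """Complete metal-corrupted kV sinogram entries with rescaled MV data.

    A linear kV-vs-MV map is fitted on the uncorrupted rays and applied
    to the masked (photon-starved) rays; `metal_mask` is a boolean array
    marking the metal trace in projection space.
    """
    good = ~metal_mask
    a, b = np.polyfit(mv[good].ravel(), kv[good].ravel(), 1)
    out = kv.copy()
    out[metal_mask] = a * mv[metal_mask] + b
    return out
```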

  2. Parametric nanomechanical amplification at very high frequency.

    PubMed

    Karabalin, R B; Feng, X L; Roukes, M L

    2009-09-01

    Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
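
    The phase-dependent gain of a degenerate parametric amplifier below threshold has a standard textbook form, sketched here as an illustration of the pumping scheme; it is not the paper's full model, which also accounts for Joule heating and thermal transport.

```python
import numpy as np

def degenerate_parametric_gain(pump_ratio, phase):
    """Small-signal gain of a below-threshold degenerate parametric
    amplifier: G(phi) = sqrt(cos^2(phi)/(1+r)^2 + sin^2(phi)/(1-r)^2),
    with r = pump amplitude / threshold amplitude. The phi = pi/2
    quadrature is amplified (diverging as r -> 1); phi = 0 is
    deamplified.
    """
    r = np.asarray(pump_ratio, float)
    return np.sqrt(np.cos(phase)**2 / (1 + r)**2
                   + np.sin(phase)**2 / (1 - r)**2)
```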

  3. Parametric amplification in MoS2 drum resonator.

    PubMed

    Prasad, Parmeshwar; Arora, Nishta; Naik, A K

    2017-11-30

    Parametric amplification is widely used in diverse areas from optics to electronic circuits to enhance low level signals by varying relevant system parameters. Parametric amplification has also been performed in several micro-nano resonators including nano-electromechanical system (NEMS) resonators based on a two-dimensional (2D) material. Here, we report the enhancement of mechanical response in a MoS2 drum resonator using degenerate parametric amplification. We use parametric pumping to modulate the spring constant of the MoS2 resonator and achieve a 10 dB amplitude gain. We also demonstrate quality factor enhancement in the resonator with parametric amplification. We investigate the effect of cubic nonlinearity on parametric amplification and show that it limits the gain of the mechanical resonator. Amplifying ultra-small displacements at room temperature and understanding the limitations of the amplification in these devices is key for using these devices for practical applications.

  4. Problems of the design of low-noise input devices. [parametric amplifiers

    NASA Technical Reports Server (NTRS)

    Manokhin, V. M.; Nemlikher, Y. A.; Strukov, I. A.; Sharfov, Y. A.

    1974-01-01

    An analysis is given of the requirements placed on the elements of parametric centimeter-waveband amplifiers for the achievement of minimal noise temperatures. A low-noise semiconductor parametric amplifier using germanium parametric diodes for a receiver operating in the 4 GHz band was developed and tested, confirming the possibility of satisfying all requirements.

  5. Sensitivity enhancement in swept-source optical coherence tomography by parametric balanced detector and amplifier

    PubMed Central

    Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.

    2016-01-01

    We propose a sensitivity enhancement method for interference-based signal detection and apply it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and a parametric balanced detector (BD). The parametric BD was realized by combining the signal band with the newly generated phase-conjugated idler band from the FOPA, specifically by superimposing these two bands at a photodetector. The sensitivity enhancement by the FOPA and the parametric BD in SS-OCT was demonstrated experimentally. The results show that SS-OCT with FOPA and SS-OCT with parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, when compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify their sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes with a conventional SS-OCT setup, SS-OCT with FOPA, and SS-OCT with parametric BD at different illumination power levels. All these results proved that using FOPA and parametric BD can improve sensitivity significantly in SS-OCT systems. PMID:27446655

  6. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
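
    A minimal sketch of a percentile bootstrap CI for a mean trajectory, of the kind the paper compares against RFT. Note this naive version is pointwise, i.e. exactly the 0D construction the paper shows to be biased for 1D data; a proper 1D bootstrap or RFT correction widens the band to control error over the whole trajectory.

```python
import numpy as np

def bootstrap_ci_1d(trajectories, n_boot=1000, alpha=0.05, rng=None):
    """Pointwise percentile bootstrap CI for the mean of 1D trajectories.

    trajectories: (n_subjects, n_time_nodes). Subjects are resampled with
    replacement; the (alpha/2, 1 - alpha/2) quantiles of the bootstrap
    means form the band at each time node.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(trajectories, float)
    n = X.shape[0]
    means = np.array([X[rng.integers(0, n, n)].mean(axis=0)
                      for _ in range(n_boot)])
    lo = np.quantile(means, alpha / 2, axis=0)
    hi = np.quantile(means, 1 - alpha / 2, axis=0)
    return lo, hi
```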

  7. A novel method to quantify and compare anatomical shape: application in cervix cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Oh, Seungjong; Jaffray, David; Cho, Young-Bin

    2014-06-01

    Adaptive radiation therapy (ART) has been proposed to restore dosimetric deficiencies during treatment delivery. In this paper, we developed a technique of Geometric reLocation for analyzing anatomical OBjects' Evolution (GLOBE) for a numerical model of tumor evolution under radiation therapy and characterized geometric changes of the target using GLOBE. A total of 174 clinical target volumes (CTVs) obtained from 32 cervical cancer patients were analyzed. GLOBE consists of three main steps: (1) deforming a 3D surface object to a sphere by a parametric active contour (PAC), (2) sampling the deformed PAC on 642 nodes of an icosahedron geodesic dome for a reference frame, and (3) unfolding the 3D data to a 2D plane for convenient visualization and analysis. The performance was evaluated with respect to (1) convergence of the deformation (iteration number and computation time) and (2) accuracy of the deformation (residual deformation). Based on deformation vectors from the planning CTV to the weekly CTVs, target-specific (TS) margins were calculated on each sampled node of GLOBE, and the systematic (Σ) and random (σ) variations of the vectors were calculated. Population-based anisotropic (PBA) margins were generated using van Herk's margin recipe. GLOBE successfully modeled 152 CTVs from 28 patients. Fast convergence was observed for most cases (137/152), with an iteration number of 65 ± 74 (average ± STD) and a computation time of 13.7 ± 18.6 min. The residual deformation of the PAC was 0.9 ± 0.7 mm, and more than 97% was less than 3 mm. Margin analysis showed the random nature of the TS-margin; as a consequence, PBA-margins perform similarly to ISO-margins. For example, the PBA-margin for 90% patient coverage at the 95% dose level is close to a 13 mm ISO-margin in terms of target coverage and OAR sparing.
GLOBE demonstrates a systematic analysis of tumor motion and deformation of patients with cervix cancer during radiation therapy and numerical modeling of PBA-margin on 642 locations of CTV surface.
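
    The margin recipe used above is van Herk's well-known formula for covering 90% of the population with at least 95% of the prescribed dose, M = 2.5Σ + 0.7σ, applied per surface node here with Σ and σ taken from the deformation-vector statistics:

```python
def van_herk_margin(sigma_sys, sigma_rand, a=2.5, b=0.7):
    """van Herk population margin M = 2.5*Sigma + 0.7*sigma, where Sigma
    is the systematic (preparation) SD and sigma the random (execution)
    SD; the coefficients correspond to 90% population coverage at the
    95% dose level. Evaluated per node to obtain anisotropic margins."""
    return a * sigma_sys + b * sigma_rand
```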

  8. The topographic development and areal parametric characterization of a stratified surface polished by mass finishing

    NASA Astrophysics Data System (ADS)

    Walton, Karl; Blunt, Liam; Fleming, Leigh

    2015-09-01

    Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications the drive to improve process control and finish tolerance is ever present. Considering its widespread use mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters, provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application, and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’, (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) with mass finishing duration is currently predicted with an exponential model. This model is shown to be limited for the current surface type at a critical range of surface finishes.
    Statistical analysis identifies a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.
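
    The logarithmic roughness model for the core and dale process regions, Sq(t) = a + b·ln(t), can be fitted by ordinary least squares; a minimal sketch (the function name and the synthetic data in the usage are illustrative):

```python
import numpy as np

def fit_log_model(t, sq):
    """Least-squares fit of Sq(t) = a + b*ln(t); returns (a, b).

    t: positive process durations; sq: measured roughness values.
    Linear in the parameters after the log transform of t.
    """
    t = np.asarray(t, float)
    A = np.column_stack([np.ones_like(t), np.log(t)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(sq, float), rcond=None)
    return coef[0], coef[1]
```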

  9. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) of scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, due to the many light paths contributing to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used e.g. to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need to repeat the radiative transfer modelling when modifying the absorption scenario, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements).
Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit as well as the absorption dependence on temperature and scattering processes.

  10. A practical solution to reduce soft tissue artifact error at the knee using adaptive kinematic constraints.

    PubMed

    Potvin, Brigitte M; Shourijeh, Mohammad S; Smale, Kenneth B; Benoit, Daniel L

    2017-09-06

    Musculoskeletal modeling and simulations have vast potential in clinical and research fields, but face various challenges in representing the complexities of the human body. Soft tissue artifact from skin-mounted markers may lead to non-physiological representation of joint motions being used as inputs to models in simulations. To address this, we have developed adaptive joint constraints on five of the six degrees of freedom of the knee joint, based on in vivo tibiofemoral joint motions recorded during walking, hopping and cutting from subjects instrumented with intra-cortical pins inserted into their tibia and femur. The constraint boundaries vary as a function of knee flexion angle and were tested on four whole-body models including four to six knee degrees of freedom. A musculoskeletal model developed in OpenSim simulation software was constrained to these in vivo boundaries during level gait, and inverse kinematics and dynamics were then resolved. Statistical parametric mapping indicated significant differences (p<0.05) in kinematics between bone-pin constrained and unconstrained model conditions, notably in knee translations, while hip and ankle flexion/extension angles were also affected, indicating that the error at the knee propagates to surrounding joints. These changes to hip, knee, and ankle kinematics led to measurable changes in hip and knee transverse plane moments, and knee frontal plane moments and forces. Since knee flexion angle can be validly represented using skin-mounted markers, our tool uses this reliable measure to guide the five other degrees of freedom at the knee and provide a more valid representation of the kinematics for these degrees of freedom. Copyright © 2017 Elsevier Ltd. All rights reserved.
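
    The adaptive-constraint idea, clamping each secondary knee degree of freedom to boundaries that vary with flexion angle, can be sketched as follows. The DOF names, bound tables, and interpolation scheme below are hypothetical stand-ins for the in vivo bone-pin envelopes, not the study's actual values or its OpenSim implementation.

```python
import numpy as np

def constrain_dofs(flexion, dofs, bounds):
    """Clamp secondary knee DOFs to flexion-dependent limits.

    flexion: knee flexion angle (deg); dofs: {name: raw value};
    bounds: {name: (angles, lower, upper)} arrays from which the limits
    at the current flexion angle are linearly interpolated.
    """
    out = {}
    for name, value in dofs.items():
        ang, lo, hi = bounds[name]
        lo_f = np.interp(flexion, ang, lo)
        hi_f = np.interp(flexion, ang, hi)
        out[name] = float(np.clip(value, lo_f, hi_f))
    return out
```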

  11. Reference Values for Body Composition and Anthropometric Measurements in Athletes

    PubMed Central

    Santos, Diana A.; Dawson, John A.; Matias, Catarina N.; Rocha, Paulo M.; Minderico, Cláudia S.; Allison, David B.; Sardinha, Luís B.; Silva, Analiza M.

    2014-01-01

    Background: Despite the importance of body composition in athletes, reference sex- and sport-specific body composition data are lacking. We aim to develop reference values for body composition and anthropometric measurements in athletes. Methods: Body weight and height were measured in 898 athletes (264 female, 634 male), anthropometric variables were assessed in 798 athletes (240 female and 558 male), and body composition was assessed by dual-energy X-ray absorptiometry (DXA) in 481 athletes (142 female and 339 male). A total of 21 different sports were represented. Reference percentiles (5th, 25th, 50th, 75th, and 95th) were calculated for each measured value, stratified by sex and sport. Because sample sizes within a sport were often very low for some outcomes, the percentiles were estimated using a parametric, empirical Bayesian framework that allowed sharing information across sports. Results: We derived sex- and sport-specific reference percentiles for the following DXA outcomes: total (whole-body scan) and regional (subtotal, trunk, and appendicular) bone mineral content, bone mineral density, absolute and percentage fat mass, fat-free mass, and lean soft tissue. Additionally, we derived reference percentiles for height-normalized indexes by dividing fat mass, fat-free mass, and appendicular lean soft tissue by height squared. We also derived sex- and sport-specific reference percentiles for the following anthropometry outcomes: weight, height, body mass index, sums of skinfold thicknesses (7 skinfolds, appendicular skinfolds, trunk skinfolds, arm skinfolds, and leg skinfolds), circumferences (hip, arm, midthigh, calf, and abdominal), and muscle circumferences (arm, thigh, and calf). Conclusions: These reference percentiles will be a helpful tool for sports professionals, in both clinical and field settings, for body composition assessment in athletes. PMID:24830292
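A stripped-down version of the stratified percentile calculation can be sketched as below. The toy values are invented, and plain empirical percentiles are used for simplicity; the study instead stabilizes small-sample estimates by pooling information across sports with a parametric empirical-Bayes model:

```python
import numpy as np

# Toy data: percentage fat mass for two sports (values are illustrative,
# not the study's measurements).
fat_mass_pct = {
    "swimming":  np.array([12.1, 14.3, 15.0, 16.2, 18.4, 13.5, 17.0]),
    "triathlon": np.array([10.2, 11.8, 12.5, 13.9, 15.1]),
}

def reference_percentiles(values, probs=(5, 25, 50, 75, 95)):
    """Plain empirical percentiles for one sex/sport stratum.

    The paper's empirical-Bayes framework would shrink these toward a
    shared parametric model when the within-sport sample is small."""
    return {p: float(np.percentile(values, p)) for p in probs}

for sport, vals in fat_mass_pct.items():
    print(sport, reference_percentiles(vals))
```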

  12. Multiple Frequency Parametric Sonar

    DTIC Science & Technology

    2015-09-28

    MULTIPLE FREQUENCY PARAMETRIC SONAR. STATEMENT OF GOVERNMENT INTEREST [0001] The invention described herein may be manufactured and...a method for increasing the bandwidth of a parametric sonar system by using multiple primary frequencies rather than only two primary frequencies...(2) Description of Prior Art [0004] Parametric sonar generates narrow beams at low frequencies by projecting sound at two distinct primary

  13. Parametric instabilities of rotor-support systems with application to industrial ventilators

    NASA Technical Reports Server (NTRS)

    Parszewski, Z.; Krodkiemski, T.; Marynowski, K.

    1980-01-01

    Rotor support systems interaction with parametric excitation is considered for both unequal principal shaft stiffness (generators) and offset disc rotors (ventilators). Instability regions and types of instability are computed in the first case, and parametric resonances in the second case. Computed and experimental results are compared for laboratory machine models. A field case study of parametric vibrations in industrial ventilators is reported. Computed parametric resonances are confirmed in field measurements, and some industrial failures are explained. Also the dynamic influence and gyroscopic effect of supporting structures are shown and computed.

  14. Parametric robust control and system identification: Unified approach

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1994-01-01

    Despite significant advances in the area of robust parametric control, the synthesis of such controllers remains a wide open problem; thus, we attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H∞ unstructured uncertainty so that H∞ synthesis techniques are applicable. Although the techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline for modeling the unstructured uncertainty that contains the parametric uncertainty. An additional loop-shaping technique is also introduced to relax its conservatism.

  15. Lopsided gauge mediation

    NASA Astrophysics Data System (ADS)

    de Simone, Andrea; Franceschini, Roberto; Giudice, Gian Francesco; Pappadopulo, Duccio; Rattazzi, Riccardo

    2011-05-01

    It has recently been pointed out that the unavoidable tuning among supersymmetric parameters required to raise the Higgs boson mass beyond its experimental limit opens up new avenues for dealing with the so-called μ-Bμ problem of gauge mediation. In fact, it allows for accommodating, with no further parameter tuning, large values of Bμ and of the other Higgs-sector soft masses, as predicted in models where both μ and Bμ are generated at one-loop order. This class of models, called Lopsided Gauge Mediation, offers an interesting alternative to conventional gauge mediation and is characterized by a strikingly different phenomenology, with light higgsinos, a very large Higgs pseudoscalar mass, and moderately light sleptons. We discuss general parametric relations involving the fine-tuning of the model and various observables such as the chargino mass and the value of tan β. We build an explicit model and study the constraints coming from LEP and the Tevatron. We show that, in spite of new interactions between the Higgs and the messenger superfields, the theory can remain perturbative up to very large scales, thus retaining gauge coupling unification.

  16. Understanding the electron-phonon interaction in polar crystals: Perspective presented by the vibronic theory

    NASA Astrophysics Data System (ADS)

    Pishtshev, A.; Kristoffel, N.

    2017-05-01

    We outline our novel results relating to the physics of the electron-TO-phonon (el-TO-ph) interaction in a polar crystal. We explained why the el-TO-ph interaction becomes effectively strong in a ferroelectric, and showed how the electron density redistribution establishes favorable conditions for soft behavior of the long-wavelength branch of the active TO vibration. In the context of the vibronic theory it has been demonstrated that, at the macroscopic level, the interaction of electrons with the polar zone-centre TO phonons can be associated with internal long-range dipole forces. We also elucidated a methodological issue of how local-field effects are incorporated within the vibronic theory. These results provided not only substantial support for the vibronic mechanism of ferroelectricity but also direct evidence of the equivalence between the vibronic and other lattice-dynamics models. The corresponding comparison allowed us to introduce an original parametrization of the vibronic interaction constants in terms of key material constants. The applicability of the suggested formula has been tested for a wide class of polar materials.

  17. Virtuality Distributions and γ*γ → π0 Transition at Handbag Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    2015-09-01

    We outline a new approach to transverse momentum dependence in hard processes, using as an example the exclusive transition γ*γ → π0 at the handbag level. We start with the coordinate representation for a matrix element ⟨p|O(0,z)|0⟩ of a bilocal operator O(0,z) describing a hadron with momentum p. Treated as a function of (pz) and z², it is parametrized through a virtuality distribution amplitude (VDA) Φ(x, σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z⁺ = 0, we introduce the transverse momentum distribution amplitude (TMDA) Ψ(x, k⊥) and write it in terms of the VDA Φ(x, σ). The results of covariant calculations, written in terms of Φ(x, σ), are converted into expressions involving Ψ(x, k⊥). We propose simple models for soft VDAs/TMDAs, and use them for comparison of handbag results with experimental (BaBar and Belle) data on the pion transition form factor.

  18. Resonant production of dark photons in positron beam dump experiments

    NASA Astrophysics Data System (ADS)

    Nardi, Enrico; Carvajal, Cristian D. R.; Ghoshal, Anish; Meloni, Davide; Raggi, Mauro

    2018-05-01

    Positron beam dump experiments have unique features for searching for very narrow resonances coupled superweakly to e+e- pairs. Owing to the continued loss of energy through soft-photon bremsstrahlung, in the first few radiation lengths of the dump a positron beam can continuously scan for resonant production of new states via e+ annihilation off an atomic e- in the target. In the case of a dark photon A' kinetically mixed with the photon, this production mode is of first order in the electromagnetic coupling α, and thus parametrically enhanced with respect to the O(α²) e+e- → γA' production mode and to the O(α³) A' bremsstrahlung in e--nucleon scattering considered so far. If the lifetime is sufficiently long to allow the A' to exit the dump, A' → e+e- decays could be easily detected and distinguished from backgrounds. We explore the foreseeable sensitivity of the Frascati PADME experiment in searching with this technique for the 17 MeV dark photon invoked to explain the 8Be anomaly in nuclear transitions.
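The resonant-production condition fixes the positron beam energy once the A' mass is given: the e+e- invariant mass on an at-rest atomic electron must equal the A' mass. A quick check of the kinematics (standard two-body formula, not a number quoted from the record):

```python
# s = 2*m_e**2 + 2*m_e*E_beam must equal m_A'**2, so the resonant
# positron beam energy is E_res = (m_A'**2 - 2*m_e**2) / (2*m_e).
m_e = 0.51099895  # electron mass, MeV

def resonant_energy(m_aprime_mev):
    """Positron beam energy (MeV) at which e+ e- -> A' is resonant."""
    return (m_aprime_mev**2 - 2 * m_e**2) / (2 * m_e)

# Dark photon mass invoked for the 8Be anomaly:
print(round(resonant_energy(17.0), 1), "MeV")
```

For a 17 MeV state this lands near 282 MeV, which sets the beam-energy scale relevant to the scanning technique described above.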

  19. Scattering and sequestering of blow-up moduli in local string models

    NASA Astrophysics Data System (ADS)

    Conlon, Joseph P.; Witkowski, Lukas T.

    2011-12-01

    We study the scattering and sequestering of blow-up fields, either local to or distant from a visible matter sector, through a CFT computation of the dependence of physical Yukawa couplings on the blow-up moduli. For a visible sector of D3-branes on orbifold singularities we compute the disk correlator ⟨τ_s^(1) τ_s^(2) ... τ_s^(n) ψψφ⟩ between orbifold blow-up moduli and matter Yukawa couplings. For n = 1 we determine the full quantum and classical correlator. This result has the correct factorisation onto lower 3-point functions and also passes numerous other consistency checks. For n > 1 we show that the structure of picture-changing applied to the twist operators establishes the sequestering of distant blow-up moduli at disk level to all orders in α'. We explain how these results are relevant to suppressing soft terms to scales parametrically below the gravitino mass. By giving vevs to the blow-up fields we can move into the smooth limit and thereby derive CFT results for the smooth Swiss-cheese Calabi-Yaus that appear in the Large Volume Scenario.

  20. Diffraction of real and virtual photons in a pyrolytic graphite crystal as source of intensive quasimonochromatic X-ray beam

    NASA Astrophysics Data System (ADS)

    Bogomazova, E. A.; Kalinin, B. N.; Naumenko, G. A.; Padalko, D. V.; Potylitsyn, A. P.; Sharafutdinov, A. F.; Vnukov, I. E.

    2003-01-01

    A series of experiments on parametric X-ray radiation (PXR) generation and on the diffraction of the soft component of relativistic-electron radiation in pyrolytic graphite (PG) crystals has been carried out at the Tomsk synchrotron. It is shown that the experimental results with PG crystals are explained by the kinematic PXR theory if we take into account a contribution from the diffraction of real photons (transition radiation, bremsstrahlung, and PXR photons as well). Measurements of the emission spectrum of channeled electrons in the photon energy range much smaller than the characteristic energy of channeling radiation have been performed with a crystal-diffraction spectrometer. For electrons incident along the <1 1 0> axis of a silicon crystal, the radiation intensity in the energy range 30 ⩽ ω ⩽ 360 keV exceeds the bremsstrahlung intensity by almost an order of magnitude. Different possibilities to create an effective source of monochromatic X-ray beams based on the diffraction of real and virtual photons in PG crystals have been considered.

  1. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deb, Kousik, E-mail: kousik@civil.iitkgp.ernet.in; Dhar, Anirban, E-mail: anirban@civil.iitkgp.ernet.in; Purohit, Sandip, E-mail: sandip.purohit91@gmail.com

    Landslide due to rainfall has been and continues to be one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to changes in water level during an incessant period of rainfall. A combined simulation-optimization methodology is proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorted Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented of the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety; even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.

  2. Steady-state dynamic behavior of an auxiliary bearing supported rotor system

    NASA Technical Reports Server (NTRS)

    Xie, Huajun; Flowers, George T.; Lawrence, Charles

    1995-01-01

    This paper investigates the steady-state responses of a rotor system supported by auxiliary bearings in which there is a clearance between the rotor and the inner race of the bearing. A simulation model based upon the rotor of a production jet engine is developed, and its steady-state behavior is explored over a wide range of operating conditions for various parametric configurations. Specifically, the influence of rotor imbalance, support stiffness, and damping is studied. It is found that imbalance may change the rotor responses dramatically in terms of frequency content at certain operating speeds. Subharmonic responses of 2nd through 10th order are all observed except the 9th order. Chaotic phenomena are also observed. Jump phenomena (or double-valued responses) of both hard-spring and soft-spring type are shown to occur at low operating speeds for systems with low auxiliary bearing damping or large clearance, even with relatively small imbalance. The effect of friction between the shaft and the inner race of the bearing is also discussed.

  3. Generation of Well-Relaxed All-Atom Models of Large Molecular Weight Polymer Melts: A Hybrid Particle-Continuum Approach Based on Particle-Field Molecular Dynamics Simulations.

    PubMed

    De Nicola, Antonio; Kawakatsu, Toshihiro; Milano, Giuseppe

    2014-12-09

    A procedure based on Molecular Dynamics (MD) simulations employing soft potentials derived from self-consistent field (SCF) theory (named MD-SCF), able to generate well-relaxed all-atom structures of polymer melts, is proposed. All-atom structures having structural correlations indistinguishable from those obtained by long MD relaxations have been obtained for poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) melts. The proposed procedure leads to computational costs related mainly to system size rather than to chain length. Several advantages of the proposed procedure over current coarse-graining/reverse-mapping strategies are apparent. No parametrization is needed to generate relaxed structures of different polymers at different scales or resolutions. There is no need for special algorithms or back-mapping schemes to change the resolution of the models. This characteristic makes the procedure general and its extension to other polymer architectures straightforward. A similar procedure can easily be extended to the generation of all-atom structures of block copolymer melts and polymer nanocomposites.

  4. Seismic response of elevated rectangular water tanks considering soil structure interaction

    NASA Astrophysics Data System (ADS)

    Visuvasam, J.; Simon, J.; Packiaraj, J. S.; Agarwal, R.; Goyal, L.; Dhingra, V.

    2017-11-01

    Overhead staged water tanks are susceptible to high lateral forces during earthquakes, which can cause failure of beam-column joints and framing elements and toppling of the tanks. To avoid such failures, they are analyzed and designed for lateral forces induced by devastating earthquakes, assuming the base of the structure is fixed and considering functional needs, response reduction, soil types, and severity of ground shaking. In this paper, a flexible base was modeled as spring stiffness in order to consider the effect of soil properties on the seismic behaviour of water tanks. A linear time-history earthquake analysis was performed using SAP2000. Parametric studies were carried out for various soil types: soft, medium, and hard. The soil stiffness values highly influence the time period and base shear of the structure. The ratios of flexible-base to fixed-base time period and of flexible-base to fixed-base base shear were examined against the water tank capacity and the overall height of the system. Both responses were found to increase as the flexibility of the soil medium decreases.

  5. Reliability assessment of serviceability performance of braced retaining walls using a neural network approach

    NASA Astrophysics Data System (ADS)

    Goh, A. T. C.; Kulhawy, F. H.

    2005-05-01

    In urban environments, one major concern with deep excavations in soft clay is the potentially large ground deformations in and around the excavation. Excessive movements can damage adjacent buildings and utilities. There are many uncertainties associated with the calculation of the ultimate or serviceability performance of a braced excavation system. These include the variabilities of the loadings, geotechnical soil properties, and engineering and geometrical properties of the wall. A risk-based approach to serviceability performance failure is necessary to incorporate systematically the uncertainties associated with the various design parameters. This paper demonstrates the use of an integrated neural network-reliability method to assess the risk of serviceability failure through the calculation of the reliability index. By first performing a series of parametric studies using the finite element method and then approximating the non-linear limit state surface (the boundary separating the safe and failure domains) through a neural network model, the reliability index can be determined with the aid of a spreadsheet. Two illustrative examples are presented to show how the serviceability performance for braced excavation problems can be assessed using the reliability index.
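The reliability-index calculation described above can be sketched with a toy surrogate standing in for the trained neural network; the surrogate form, the parameter distributions, and the serviceability limit below are all assumptions for illustration, and a simple first-order moment estimate replaces the paper's spreadsheet procedure:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical surrogate for maximum wall deflection (mm) as a function of
# a soil stiffness factor x1 and a wall stiffness factor x2. In the paper
# this role is played by a neural network fitted to finite element
# parametric runs; the linear form and all numbers here are assumptions.
def deflection_surrogate(x1, x2):
    return 60.0 - 25.0 * x1 - 10.0 * x2

limit_mm = 45.0  # assumed serviceability limit on wall deflection

# Assumed variabilities of the design parameters
x1 = rng.normal(1.0, 0.15, 200_000)
x2 = rng.normal(1.0, 0.20, 200_000)

# Limit state: g > 0 means the deflection stays below the limit
g = limit_mm - deflection_surrogate(x1, x2)

beta = g.mean() / g.std()         # first-order reliability index
p_fail = NormalDist().cdf(-beta)  # implied probability of exceedance
print(round(beta, 2), p_fail)
```

A larger reliability index corresponds to a smaller probability that the serviceability limit is exceeded, which is how the index is read in the two illustrative examples of the paper.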

  6. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  7. A Dictionary Learning Approach with Overlap for the Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

    PubMed Central

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis functions coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987

  8. WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Folkerts, M; Jiang, S

    Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders convergence of the IR algorithm. This is particularly a problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts over a large area. Our purpose is to develop a practical cone artifact correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and the measured projections. We overcome this problem by using an FDK-type reconstruction to estimate the missing data and by designing weighting factors to compensate for the inconsistency caused by the missing data. We validated the proposed method in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of the projection data, the severe cone artifacts degrade overall image quality. For a head-and-neck case under full-fan mode, 13 out of 80 slices are contaminated. It is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with the mean intensity difference for the contaminated slices decreased from ∼497 HU to ∼39 HU. Conclusion: A practical and effective solution for cone artifact correction is proposed and validated within a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01).

  9. A dictionary learning approach with overlap for the low dose computed tomography reconstruction and its vectorial application to differential phase tomography.

    PubMed

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis functions coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography.
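The iterative proximal gradient descent with FISTA acceleration described above can be sketched on a generic L1-regularized least-squares problem. This is a textbook FISTA loop on an arbitrary synthetic sparse-recovery instance, not the PyHST implementation, and it omits the patch-overlap penalty specific to the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse-coding problem standing in for the patch/dictionary setting:
# recover sparse coefficients x from b = A @ x_true with an L1 penalty,
#   minimize 0.5*||A x - b||^2 + lam*||x||_1
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) * 3
b = A @ x_true

lam = 0.02
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# FISTA: proximal gradient descent with Nesterov-style acceleration
x = np.zeros(n); y = x.copy(); t = 1.0
for _ in range(300):
    grad = A.T @ (A @ y - b)                       # gradient of the fidelity term
    x_new = soft_threshold(y - grad / L, lam / L)  # proximal step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2       # momentum schedule
    y = x_new + ((t - 1) / t_new) * (x_new - x)
    x, t = x_new, t_new

rel_err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print("relative recovery error:", rel_err)
```

The acceleration changes nothing per iteration but the extrapolation point y, which is what gives FISTA its faster convergence over plain ISTA.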

  10. Establishing the biomechanical properties of the pelvic soft tissues through an inverse finite element analysis using magnetic resonance imaging.

    PubMed

    Silva, M E T; Brandão, S; Parente, M P L; Mascarenhas, T; Natal Jorge, R M

    2016-04-01

    The mechanical characteristics of the female pelvic floor are relevant when explaining pelvic dysfunction. Decreased tissue elasticity often causes inability to maintain urethral position, also leading to vaginal and rectal descent when coughing or defecating in response to an increase in intra-abdominal pressure. These conditions can be associated with changes in the mechanical properties of the supportive structures, namely the pelvic floor muscles, including impairment. In this work, we used an inverse finite element analysis to calculate the material constants for the passive mechanical behavior of the pelvic floor muscles. The numerical model of the pelvic floor muscles and bones was built from magnetic resonance axial images acquired at rest. Muscle deformation, simulating the Valsalva maneuver with a pressure of 4 kPa, was compared with the muscle displacement obtained through additional dynamic magnetic resonance imaging. The difference in displacement was 0.15 mm in the antero-posterior direction and 3.69 mm in the supero-inferior direction, equating to percentage errors of 7.0% and 16.9%, respectively. We obtained the smallest difference in the displacements using an iterative process that reached the material constants for the Mooney-Rivlin constitutive model (c10 = 11.8 kPa and c20 = 5.53E-02 kPa). For each iteration, the orthogonal distance was computed between each node in the group of nodes defining the puborectal muscle in the numerical model and in the dynamic magnetic resonance images. With the methodology used in this work, it was possible to obtain in vivo biomechanical properties of the pelvic floor muscles for a specific subject using input information acquired non-invasively. © IMechE 2016.
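The inverse-analysis loop (adjust material constants until the simulated displacement matches the MRI-measured displacement) can be sketched with a toy forward model standing in for the finite element simulation. The model form, its coefficients, and the starting point below are invented for illustration; only the mismatch-minimization structure mirrors the procedure described:

```python
import numpy as np

# Toy stand-in for the FE forward model: predicted displacement (mm) of a
# tracked node under the applied pressure as a function of the
# Mooney-Rivlin constants. An assumed decreasing function of stiffness;
# the real forward model is a full pelvic-floor FE simulation.
PRESSURE = 4.0  # kPa

def forward_model(c10, c20):
    return 50.0 * PRESSURE / (c10 + 2.0 * c20 + 1.0)

# Pretend dynamic-MRI observation, generated from known constants so the
# fit has a verifiable target. Note only the combination c10 + 2*c20 is
# identifiable in this toy model.
measured = forward_model(11.8, 0.0553)

# Inverse analysis: iterative grid refinement around the current best
# candidate, minimizing the displacement mismatch (the role played by the
# orthogonal node distances in the study).
best = (5.0, 0.5)
step = 4.0
for _ in range(20):
    c10s = best[0] + step * np.linspace(-1, 1, 9)
    c20s = best[1] + step * np.linspace(-1, 1, 9) / 10
    grid = [(a, b) for a in c10s for b in c20s if a > 0 and b > 0]
    best = min(grid, key=lambda p: abs(forward_model(*p) - measured))
    step *= 0.5

print("fitted constants:", round(best[0], 2), round(best[1], 3))
print("displacement mismatch:", abs(forward_model(*best) - measured))
```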

  11. EBIT spectroscopy of highly charged heavy ions relevant to hot plasmas

    NASA Astrophysics Data System (ADS)

    Nakamura, Nobuyuki

    2013-05-01

    An electron beam ion trap (EBIT) is a versatile device for studying highly charged ions. We have been using two types of EBITs for spectroscopic studies of highly charged ions. One is a high-energy device called the Tokyo-EBIT, and the other is a compact low-energy device called CoBIT. Complementary use of them enables us to obtain spectroscopic data for ions over a wide charge-state range interacting with electrons over a wide energy range. In this talk, we present EBIT spectra of highly charged ions of tungsten, iron, bismuth, etc., which are relevant to hot plasmas. Tungsten is considered to be the main impurity in the plasma of ITER (the next-generation nuclear fusion reactor), and thus its emission lines are important for diagnosing and controlling the ITER plasma. We have observed many previously unreported lines to address the lack of spectroscopic data for tungsten ions. Iron is one of the main components of the solar corona, and its spectra are used to diagnose temperature, density, etc. The diagnostics are usually done by comparing observed spectra with model calculations. An EBIT can provide spectra under a well-defined condition; they are thus useful for testing the model calculations. Laser-produced bismuth plasma is one of the candidates for a soft x-ray source in the water-window region. An EBIT has a narrow charge-state distribution; it is thus useful for disentangling the spectra of laser-produced plasmas containing ions with a wide charge-state range. Performed with the support and under the auspices of the NIFS Collaboration Research program (NIFS09KOAJ003) and JSPS KAKENHI Number 23246165, and partly supported by the JSPS-NRF-NSFC A3 Foresight Program in the field of Plasma Physics.

  12. Imaging of Arthroplasties: Improved Image Quality and Lesion Detection With Iterative Metal Artifact Reduction, a New CT Metal Artifact Reduction Technique.

    PubMed

    Subhas, Naveen; Polster, Joshua M; Obuchowski, Nancy A; Primak, Andrew N; Dong, Frank F; Herts, Brian R; Iannotti, Joseph P

    2016-08-01

    The purpose of this study was to compare iterative metal artifact reduction (iMAR), a new single-energy metal artifact reduction technique, with filtered back projection (FBP) in terms of attenuation values, qualitative image quality, and streak artifacts near shoulder and hip arthroplasties and observer ability with these techniques to detect pathologic lesions near an arthroplasty in a phantom model. Preoperative and postoperative CT scans of 40 shoulder and 21 hip arthroplasties were reviewed. All postoperative scans were obtained using the same technique (140 kVp, 300 quality reference mAs, 128 × 0.6 mm detector collimation) on one of three CT scanners and reconstructed with FBP and iMAR. The attenuation differences in bones and soft tissues between preoperative and postoperative scans at the same location were compared; image quality and streak artifact for both reconstructions were qualitatively graded by two blinded readers. Observer ability and confidence to detect lesions near an arthroplasty in a phantom model were graded. For both readers, iMAR had more accurate attenuation values (p < 0.001), qualitatively better image quality (p < 0.001), and less streak artifact (p < 0.001) in all locations near arthroplasties compared with FBP. Both readers detected more lesions (p ≤ 0.04) with higher confidence (p ≤ 0.01) with iMAR than with FBP in the phantom model. The iMAR technique provided more accurate attenuation values, better image quality, and less streak artifact near hip and shoulder arthroplasties than FBP; iMAR also increased observer ability and confidence to detect pathologic lesions near arthroplasties in a phantom model.

  13. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is an image reconstruction technique recently introduced by General Electric (GE). When combined with a conventional filtered back-projection (FBP) approach, it improves image noise reduction. To quantify the image-quality benefits and the dose reduction provided by the ASIR method relative to pure FBP, the standard deviation (SD), modulation transfer function (MTF), noise power spectrum (NPS), image uniformity, and noise homogeneity were examined. Measurements were performed on a quality-control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT scanner was employed, and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft, and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced, and modifications of the NPS curve shape were observed. For the pediatric cardiac CT exam, however, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option for reducing radiation dose, especially for pediatric patients.

  14. ITER Construction—Plant System Integration

    NASA Astrophysics Data System (ADS)

    Tada, E.; Matsuda, S.

    2009-02-01

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays the central role in constructing ITER and leading it into operation. Since most ITER components are to be provided in kind by the member countries, integrated project management must be scoped in advance of the real work. This includes the design, procurement, system assembly, testing, licensing, and commissioning of ITER.

  15. Joint Carrier-Phase Synchronization and LDPC Decoding

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban

    2009-01-01

    A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase θc) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ωc/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal.
The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase θc as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase θc.

  16. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    ITER diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  17. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    ITER diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  18. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-07

    A recent series of papers by Charles T. Perretti and collaborators has shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed with simple Bayesian model-checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts of unknown reliability. This argument is illustrated with the simple theta-logistic model previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
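The theta-logistic model invoked here is easy to simulate, which is what makes the virtual-data check of forecasting reliability practical. Below is a minimal sketch; the parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def theta_logistic(n0, r, K, theta, sigma, steps, rng):
    """Simulate the stochastic theta-logistic model
    N[t+1] = N[t] * exp(r * (1 - (N[t]/K)**theta) + eps[t]),
    with process noise eps[t] ~ Normal(0, sigma^2)."""
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        growth = r * (1.0 - (n[t] / K) ** theta)
        n[t + 1] = n[t] * np.exp(growth + rng.normal(0.0, sigma))
    return n

# Illustrative parameters: modest growth rate, carrying capacity 100.
rng = np.random.default_rng(0)
traj = theta_logistic(n0=10.0, r=0.5, K=100.0, theta=1.0,
                      sigma=0.05, steps=200, rng=rng)
```

With fitted parameters in hand, many such trajectories can be generated and re-forecast to estimate forecasting uncertainty, which is the diagnostic advantage the abstract argues for.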

  19. Parametric and non-parametric approach for sensory RATA (Rate-All-That-Apply) method of ledre profile attributes

    NASA Astrophysics Data System (ADS)

    Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.

    2018-03-01

    This study is aimed at investigating the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, a sticky mouthfeel, a stinging sensation, and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variance in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable to consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).
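The parametric/non-parametric comparison described can be reproduced in miniature: run one-way ANOVA (parametric) and Kruskal-Wallis (non-parametric) on the same categorical rating data and compare the conclusions. The data below are hypothetical RATA-style scores, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical integer category ratings for three products; the discrete,
# non-normal scores violate the normality assumption behind ANOVA.
rng = np.random.default_rng(1)
ledre = rng.integers(4, 8, size=300)        # rated higher on average
eggroll = rng.integers(2, 6, size=300)
semprong = rng.integers(2, 6, size=300)

f_stat, p_parametric = stats.f_oneway(ledre, eggroll, semprong)   # ANOVA
h_stat, p_nonparam = stats.kruskal(ledre, eggroll, semprong)      # Kruskal-Wallis
```

With around 300 panelists per group, both tests flag the same group difference, echoing the abstract's finding that the parametric approach remains usable for large consumer panels.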

  20. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). 
Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  1. Parametric instability induced by X-mode wave heating at EISCAT

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Zhou, Chen; Liu, Moran; Honary, Farideh; Ni, Binbin; Zhao, Zhengyu

    2016-10-01

    In this paper, we present results on parametric instability induced by X-mode wave heating, observed by the EISCAT (European Incoherent Scatter Scientific Association) radar at Tromsø, Norway. Three typical X-mode ionospheric heating experiments, on 22 October 2013, 19 October 2012, and 21 February 2013, are investigated in detail. Both parametric decay instability (PDI) and oscillating two-stream instability are observed during X-mode heating. We suggest that the full dispersion relation of the Langmuir wave can be employed to analyze the excitation of parametric instability under X-mode heating. A modified kinetic electron distribution is proposed and analyzed, which is able to satisfy the matching conditions for parametric instability excitation. The parallel electric field component of the X-mode heating wave can also exceed the parametric instability excitation threshold under certain conditions.
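The matching conditions referred to above are the standard ones for parametric decay: the pump (heater) wave decays into a Langmuir wave and an ion-acoustic wave under frequency and wavenumber matching, with the Langmuir branch described by its full (Bohm-Gross) dispersion relation. As a hedged summary of the textbook relations, not reproduced from the paper itself:

```latex
% Parametric decay instability (PDI) matching conditions:
\omega_0 = \omega_L + \omega_{IA}, \qquad
\mathbf{k}_0 = \mathbf{k}_L + \mathbf{k}_{IA}
% Full (Bohm-Gross) dispersion relation of the Langmuir wave:
\omega_L^2 = \omega_{pe}^2 + 3\, k_L^2 v_{te}^2
```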

  2. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme uses an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and the practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution over several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
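The scheme described, histories growing per iteration while the relaxation factor shrinks, is a stochastic-approximation update. A minimal sketch follows, with a mock noisy solver standing in for the Monte Carlo transport code and the thermal-hydraulics feedback omitted; all names and numerical values are illustrative assumptions:

```python
import random

TRUE_POWER = [0.8, 1.2, 1.0, 1.0]   # hypothetical nodal power distribution

def mock_mc_solve(histories, rng):
    # Stand-in for the Monte Carlo transport solve: unbiased but noisy,
    # with standard error shrinking as 1/sqrt(histories).
    return [p + rng.gauss(0.0, 1.0) / histories ** 0.5 for p in TRUE_POWER]

def stochastic_iteration(n_steps, base_histories, rng):
    s = mock_mc_solve(base_histories, rng)        # iteration 1
    for n in range(2, n_steps + 1):
        histories = n * base_histories            # histories grow linearly
        phi = mock_mc_solve(histories, rng)
        alpha = 1.0 / n                           # shrinking relaxation factor
        s = [si + alpha * (pi - si) for si, pi in zip(s, phi)]
    return s

rng = random.Random(42)
est = stochastic_iteration(n_steps=50, base_histories=1000, rng=rng)
```

With alpha = 1/n the running estimate is an equal-weight average of all previous Monte Carlo results, while the growing history count concentrates computational effort in the later, more accurate iterations.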

  3. Optical parametric amplification and oscillation assisted by low-frequency stimulated emission.

    PubMed

    Longhi, Stefano

    2016-04-15

    Optical parametric amplification and oscillation provide powerful tools for coherent light generation in spectral regions inaccessible to lasers. Parametric gain is based on a frequency down-conversion process and, thus, it cannot be realized for signal waves at a frequency ω3 higher than the frequency of the pump wave ω1. In this Letter, we suggest a route toward the realization of upconversion optical parametric amplification and oscillation, i.e., amplification of the signal wave by a coherent pump wave of lower frequency, assisted by stimulated emission of the auxiliary idler wave. When the signal field is resonated in an optical cavity, parametric oscillation is obtained. Design parameters for the observation of upconversion optical parametric oscillation at λ3=465 nm are given for a periodically poled lithium-niobate (PPLN) crystal doped with Nd(3+) ions.

  4. OPCPA front end and contrast optimization for the OMEGA EP kilojoule, picosecond laser

    DOE PAGES

    Dorrer, C.; Consentino, A.; Irwin, D.; ...

    2015-09-01

    OMEGA EP is a large-scale laser system that combines optical parametric amplification and solid-state laser amplification on two beamlines to deliver high-intensity, high-energy optical pulses. The temporal contrast of the output pulse is limited by the front-end parametric fluorescence and other features that are specific to parametric amplification. The impact of the two-crystal parametric preamplifier, pump-intensity noise, and pump-signal timing is experimentally studied. The implementation of a parametric amplifier pumped by a short pump pulse before stretching, further amplification, and recompression to enhance the temporal contrast of the high-energy short pulse is described.

  5. Hybrid chirped pulse amplification system

    DOEpatents

    Barty, Christopher P.; Jovanovic, Igor

    2005-03-29

    A hybrid chirped pulse amplification system wherein a short-pulse oscillator generates an oscillator pulse. The oscillator pulse is stretched to produce a stretched oscillator seed pulse. A pump laser generates a pump laser pulse. The stretched oscillator seed pulse and the pump laser pulse are directed into an optical parametric amplifier, producing an optical parametric amplifier output amplified signal pulse and an optical parametric amplifier output unconverted pump pulse. The optical parametric amplifier output amplified signal pulse and the optical parametric amplifier output unconverted pump pulse are directed into a laser amplifier, producing a laser amplifier output pulse. The laser amplifier output pulse is compressed to produce a recompressed hybrid chirped pulse amplification pulse.

  6. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
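The DEA-CRS scores used in this comparison come from solving one small linear program per hospital (decision-making unit). Below is a minimal input-oriented CCR sketch on toy data (not the study's 17-hospital panel), using scipy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                                # minimise theta
        A_ub, b_ub = [], []
        for i in range(m):                        # sum_j lam_j x_ji <= theta x_oi
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):                        # sum_j lam_j y_jr >= y_or
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: the middle hospital uses twice the input of the first
# for the same output, so it is dominated.
X = np.array([[2.0], [4.0], [3.0]])   # e.g. beds (input)
Y = np.array([[1.0], [1.0], [1.5]])   # e.g. discharges (output)
eff = dea_crs_efficiency(X, Y)
```

Here eff comes out as 1.0 for the two efficient units and 0.5 for the dominated one; DEA-VRS, the more flexible specification the abstract contrasts, would add the convexity constraint sum(lambda) = 1.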

  7. A numerical study on piezoelectric energy harvesting by combining transverse galloping and parametric instability phenomena

    NASA Astrophysics Data System (ADS)

    Franzini, Guilherme Rosa; Santos, Rebeca Caramêz Saraiva; Pesce, Celso Pupo

    2017-12-01

    This paper numerically investigates the effects of parametric instability on piezoelectric energy harvesting from the transverse galloping of a square prism. A two-degrees-of-freedom reduced-order model of this problem is proposed and numerically integrated. A usual quasi-steady galloping model is applied, in which the transverse force coefficient is adopted as a cubic polynomial function of the angle of attack. Time histories of the nondimensional prism displacement, the electric voltage, and the power dissipated at both the dashpot and the electrical resistance are obtained as functions of the reduced velocity. Both the oscillation amplitude and the electric voltage increased with the reduced velocity for all parametric excitation conditions tested. For low values of reduced velocity, 2:1 parametric excitation enhances the electric voltage. On the other hand, for higher reduced velocities, 1:1 parametric excitation (i.e., at the natural frequency) enhances both the oscillation amplitude and the electric voltage. It was also found that, depending on the parametric excitation frequency, the harvested electrical power can be amplified by 70% compared to the case with no parametric excitation.
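A rough numerical sketch of the kind of reduced-order model described, one mechanical degree of freedom with a cubic quasi-steady galloping force and 2:1 stiffness modulation, coupled to a harvesting-circuit voltage, can be integrated directly. All coefficients below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nondimensional sketch: x is the prism transverse displacement, v the
# piezo voltage. The quasi-steady galloping force is a cubic polynomial
# in the velocity (A1 > 0 destabilising, A3 < 0 saturating), and the
# stiffness is modulated at twice the natural frequency (2:1 excitation).
ZETA, A1, A3 = 0.01, 0.05, -0.1      # structural damping, galloping coeffs
EPS, OMEGA_P = 0.1, 2.0              # parametric amplitude and frequency
THETA, KAPPA, LAM = 0.05, 0.5, 1.0   # coupling, circuit gain, RC decay

def rhs(t, state):
    x, xd, v = state
    f_gallop = A1 * xd + A3 * xd ** 3              # quasi-steady cubic force
    stiffness = 1.0 + EPS * np.cos(OMEGA_P * t)    # parametric modulation
    xdd = -2.0 * ZETA * xd - stiffness * x + f_gallop - THETA * v
    vd = -LAM * v + KAPPA * xd                     # harvesting circuit
    return [xd, xdd, vd]

sol = solve_ivp(rhs, (0.0, 500.0), [0.01, 0.0, 0.0], max_step=0.05,
                t_eval=np.linspace(400.0, 500.0, 2000))
amp_x = np.max(np.abs(sol.y[0]))   # steady-state displacement amplitude
amp_v = np.max(np.abs(sol.y[2]))   # steady-state voltage amplitude
```

Because the net linear damping is negative (A1 > 2*ZETA), small initial motion grows into a galloping limit cycle saturated by the cubic term; sweeping OMEGA_P near 1.0 and 2.0 would reproduce the kind of 1:1 versus 2:1 comparison the paper reports.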

  8. Parametric Modelling of As-Built Beam Framed Structure in Bim Environment

    NASA Astrophysics Data System (ADS)

    Yang, X.; Koehl, M.; Grussenmeyer, P.

    2017-02-01

    A complete documentation and conservation of a historic timber roof requires the integration of geometry modelling, management of attribute and dynamic information, and results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometry modelling, parametric element management, and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides a platform for parametric modelling and information management, an API plugin was developed that automatically creates the parametric beam elements and links them together with strict relationships. The plugin under development is introduced in the paper; it can derive the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling through interactive API development in a BIM environment. It also integrates the separate data-processing steps and different platforms into the single Revit environment.

  9. Multibeam Formation with a Parametric Sonar

    DTIC Science & Technology

    1976-03-05

    AD-A022 815. Multibeam Formation with a Parametric Sonar. Robert L. White, Texas University at Austin. Prepared for the Office of Naval Research, 5 March 1976. Final report under Contract N00014-70-A-0166, Task 0020, 1 February - 31 July 1974. Approved for public release; distribution unlimited. Abstract: Parametric sonar has proven to be an effective concept in sonar

  10. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach to determine time-varying vibration modes from input-output measurements. In the first step, single-degree-of-freedom (SDOF) vibration modes are extracted from the multi-degree-of-freedom (MDOF) non-parametric system representation with the use of time-frequency wavelet-based filters. The second step involves a time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated through system identification analysis of an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure using minimal a priori information on the model.
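The recursive parametric step can be illustrated with the simpler ARX cousin of ARMAX: a recursive least-squares (RLS) estimator with a forgetting factor, which is what allows slowly varying parameters to be tracked. A self-contained sketch on synthetic first-order data follows; the model orders, forgetting factor, and test system are all assumptions for illustration:

```python
import numpy as np

def rls_arx(u, y, na=1, nb=1, lam=0.95):
    """Recursive least squares for an ARX(na, nb) model
    y[k] + a1*y[k-1] + ... = b1*u[k-1] + ..., with forgetting factor
    lam < 1 so time-varying parameters can be tracked."""
    d = na + nb
    theta = np.zeros(d)          # parameter estimate [a's, b's]
    P = 1e3 * np.eye(d)          # inverse-correlation matrix
    history = []
    for k in range(max(na, nb), len(y)):
        phi = np.r_[-y[k - na:k][::-1], u[k - nb:k][::-1]]  # regressor
        e = y[k] - phi @ theta                              # prediction error
        K = P @ phi / (lam + phi @ P @ phi)                 # gain vector
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam
        history.append(theta.copy())
    return np.array(history)

# Hypothetical first-order system: y[k] = 0.7*y[k-1] + 0.5*u[k-1] + noise,
# i.e. a1 = -0.7, b1 = 0.5 in ARX convention.
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.7 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
est = rls_arx(u, y)
```

The array est traces the parameter estimates over time; for a genuinely time-varying system, the forgetting factor discounts old data so the traces can follow the drift.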

  11. Parametric Cooling of Ultracold Atoms

    NASA Astrophysics Data System (ADS)

    Boguslawski, Matthew; Bharath, H. M.; Barrios, Maryrose; Chapman, Michael

    2017-04-01

    An oscillator is characterized by a restoring force which determines the natural frequency at which oscillations occur. The amplitude and phase noise of these oscillations can be amplified or squeezed by modulating the magnitude of this force (e.g. the stiffness of the spring) at twice the natural frequency. This is parametric excitation, a long-studied phenomenon in both the classical and quantum regimes. Parametric cooling, i.e. the parametric squeezing of thermo-mechanical noise in oscillators, has been studied in micro-mechanical oscillators and trapped ions. We study parametric cooling in ultracold atoms. This method yields a modest reduction of the variance of the atomic momenta and can easily be employed with pre-existing controls in many experiments. Parametric cooling is comparable to delta-kicked cooling and shares similar limitations. We expect this cooling to find utility in microgravity experiments where the experiment duration is limited by atomic free expansion.
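The modulation described, driving the trap stiffness at twice the natural frequency, is the classic parametric-resonance configuration. As a hedged reminder of the textbook equation of motion, not taken from the talk itself:

```latex
% Harmonic trap with stiffness modulated at twice the natural frequency:
\ddot{x} + \omega_0^2 \left[ 1 + \epsilon \cos\!\left( 2\omega_0 t + \phi \right) \right] x = 0
% Depending on the modulation phase \phi, one quadrature of the motion is
% amplified while the conjugate quadrature is squeezed (cooled).
```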

  12. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behaviour (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merits of backward methods. PMID:24991640
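The Jacobi iteration analysed here is short enough to state in full. A minimal sketch on a strictly diagonally dominant system, for which the spectral-radius condition discussed in the paper guarantees convergence:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), where A = D + R
    with D the diagonal of A. Returns the solution and iteration count."""
    D = np.diag(A)                  # diagonal entries
    R = A - np.diagflat(D)          # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Strictly diagonally dominant matrix, so the spectral radius of the
# Jacobi iteration matrix D^{-1} R is below one and the iteration converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
```

The convergence criterion is exactly the spectral-radius condition in the paper: the iteration converges if and only if the spectral radius of the iteration matrix D^{-1}R is less than one.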

  13. Pulsed laser deposition to synthesize the bridge structure of artificial nacre: Comparison of nano- and femtosecond lasers

    NASA Astrophysics Data System (ADS)

    Melaibari, Ammar A.; Molian, Pal

    2012-11-01

    Nature offers inspiration for new adaptive technologies that allow us to build amazing shapes and structures, such as nacre, from synthetic materials. We have accordingly designed a pulsed laser ablation manufacturing process, involving thin-film deposition and micro-machining, to create a hard/soft layered "brick-bridge-mortar" nacre of AlMgB14 (hard phase) with Ti (soft phase). In this paper, we report pulsed laser deposition (PLD) to mimic the brick and bridge structures of natural nacre in AlMgB14. Particulate formation, inherent in PLD, is exploited to develop the bridge structure. Mechanical behavior analysis of the AlMgB14/Ti system indicated that, for performance equivalent to natural nacre, the brick should be 250 nm thick with 9 μm lateral dimensions, while the bridge (particle) should have a diameter of 500 nm. Both nanosecond (ns) and femtosecond (fs) pulsed lasers were employed for PLD in an iterative approach that varies pulse energy, pulse repetition rate, and target-to-substrate distance to achieve the desired brick and bridge characteristics. Scanning electron microscopy, x-ray photoelectron spectroscopy, and optical profilometry were used to evaluate the film thickness, particle size and density, stoichiometry, and surface roughness of the thin films. Results indicated that both ns-pulsed and fs-pulsed lasers produce the desired nacre features. However, each laser may be chosen for different reasons: the fs-pulsed laser is preferred for much shorter deposition time, better stoichiometry, uniformly sized particles, and uniform film thickness, while the ns-pulsed laser is favored for industrial acceptance, reliability, ease of handling, and low cost.

  14. Cone-beam CT image contrast and attenuation-map linearity improvement (CALI) for brain stereotactic radiosurgery procedures

    NASA Astrophysics Data System (ADS)

    Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark

    2017-03-01

    A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high-spatial-resolution iterative reconstruction algorithm and is tailored to the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The CBCT system incorporated in the ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving shading and beam-hardening artifact corrections, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy introduced by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e., no prior information is required) and utilizes filtered back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome-artifact corrections are segmentation free. CALI was tested on CatPhan phantom images as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions that were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.

  15. Automatic determination of the artery vein ratio in retinal images

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Abràmoff, Michael D.

    2010-03-01

    A lower ratio between the width of the arteries and veins (Arteriolar-to-Venular diameter Ratio, AVR) on the retina is well established to be predictive of stroke and other cardiovascular events in adults, as well as of an increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels in the ROI into arteries and veins, measures their widths, and calculates the AVR. After vessel segmentation and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving a set of vessel segments containing centerline pixels. Features are extracted from each centerline pixel and used to assign it a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected segment should be of the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are used to calculate the AVR. We train and test the algorithm using a set of 25 high-resolution digital color fundus photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or a vein. We compared the AVR values produced by our system with those determined using a computer-assisted method in 15 high-resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
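The per-segment median step described above is simple to sketch: every centerline pixel inherits the median vein-likelihood of its connected segment, so a single noisy pixel is overruled. The function and data below are illustrative, not the paper's implementation:

```python
import numpy as np

def median_segment_labels(segment_ids, soft_labels):
    """Replace each centerline pixel's soft (vein-likelihood) label with
    the median label of its connected vessel segment."""
    segment_ids = np.asarray(segment_ids)
    soft_labels = np.asarray(soft_labels, dtype=float)
    out = np.empty_like(soft_labels)
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        out[mask] = np.median(soft_labels[mask])
    return out

# Hypothetical two-segment example: the one low (0.1) pixel in segment 0
# is overruled by the segment's median vein likelihood.
seg = [0, 0, 0, 1, 1]
p_vein = [0.9, 0.8, 0.1, 0.2, 0.3]
smoothed = median_segment_labels(seg, p_vein)
```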

  16. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal subject as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and LORETA output files (*.lor) as the 2,394-gray-matter-pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method, in which individual normal subjects were withdrawn and then statistically classified as either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z-score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, with a range over the 2,394 gray matter pixels of 0.11% to 27.66%. At P < .01, parametric Z-score cross-validation false positives averaged 0.26% and ranged from 0% to 6.65%. The non-parametric Key Institute t-max statistic at P < .05 had an average misclassification rate of 7.64% and ranged from 0.04% to 43.37% false positives; at P < .01 it had an average misclassification rate of 6.67% and ranged from 0% to 41.34% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
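    The leave-one-out Z-score procedure used in this study can be sketched for a single gray-matter pixel as below. This is a minimal illustration assuming one value per subject (the actual study operates on 2,394 pixels per frequency band), with `z_crit` standing in for the chosen two-tailed P threshold.

    ```python
    import numpy as np

    def loo_false_positive_rate(values, z_crit=1.96):
        """Leave-one-out Z-score classification of normal subjects.

        For each subject, the normative mean and SD are recomputed from the
        remaining subjects; the held-out value is flagged abnormal when
        |Z| > z_crit. Values are log10-transformed first, mirroring the
        transform the study used to approximate Gaussianity.
        """
        x = np.log10(values)
        n = len(x)
        flags = 0
        for i in range(n):
            rest = np.delete(x, i)             # normative sample without subject i
            z = (x[i] - rest.mean()) / rest.std(ddof=1)
            flags += abs(z) > z_crit
        return flags / n
    ```

    For genuinely Gaussian (after transform) data, the flagged fraction should hover near the nominal false-positive rate of the threshold, which is the calibration property the study checks per pixel.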

  17. An improvement of quantum parametric methods by using SGSA parameterization technique and new elementary parametric functionals

    NASA Astrophysics Data System (ADS)

    Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.

    A systematic improvement of parametric quantum methods (PQMs) is performed by considering: (a) a new application of a parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPFs) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for the selected trial set of molecules.
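    SGSA is a specialized annealing variant; the sketch below is plain simulated annealing on a stand-in cost function, shown only to illustrate the general parameter-fitting loop. The cost function, step size, and cooling schedule are hypothetical and not those of CATIVIC or SGSA.

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995,
                            iters=2000, seed=1):
        """Generic simulated annealing: accept uphill moves with probability
        exp(-dE/T) while the temperature T decays geometrically."""
        rng = random.Random(seed)
        x, e = list(x0), cost(x0)
        best_x, best_e = x, e
        t = t0
        for _ in range(iters):
            # Propose a random perturbation of every parameter
            cand = [xi + rng.uniform(-step, step) for xi in x]
            ec = cost(cand)
            # Always accept improvements; accept worsenings with Boltzmann probability
            if ec < e or rng.random() < math.exp(-(ec - e) / t):
                x, e = cand, ec
                if e < best_e:
                    best_x, best_e = x, e
            t *= cooling
        return best_x, best_e
    ```

    In a real PQM parameterization, `cost` would measure the error of computed molecular properties against a reference set; here any smooth multivariate function serves to exercise the loop.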

  18. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  19. Orbit Transfer Rocket Engine Technology Program, Advanced Engine Study Task D.6

    DTIC Science & Technology

    1992-02-28

    [OCR of the report documentation page; recoverable details follow.] Report No. NASA 187215. Title: Orbit Transfer Rocket Engine Technology Program, Advanced Engine Study. Under the Advanced Engine Study, three primary subtasks were accomplished: 1) Design and Parametric Data, 2) Engine Requirement Variation Studies, and 3) Vehicle Study. Contents listed include mixture ratio parametrics; thrust parametrics and off-design mixture ratio scans; expansion area ratio parametrics; and OTV 20 klbf engine off-design data.

  20. Characterization of a multimode coplanar waveguide parametric amplifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simoen, M., E-mail: simoen@chalmers.se; Krantz, P.; Bylander, Jonas

    2015-10-21

    We characterize a Josephson parametric amplifier based on a flux-tunable quarter-wavelength resonator. The fundamental resonance frequency is ∼1 GHz, but we use higher modes of the resonator for our measurements. An on-chip tuning line allows for magnetic flux pumping of the amplifier. We investigate and compare degenerate parametric amplification, involving a single mode, and nondegenerate parametric amplification, using a pair of modes. We show that we reach quantum-limited noise performance in both cases.
