Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.
Rao, Ying; Wang, Yanghua
2017-08-17
In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy for reducing the computational cost is shot-encoding, which randomly encodes all shots and sums them into one super shot, significantly reducing the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it generates a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
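As a rough sketch of the restart idea (my construction on a toy linear misfit, not the authors' FWI code), the loop below draws a fresh random ±1 encoding for each segment, runs a few L-BFGS iterations on the encoded super-shot misfit, and discards the L-BFGS memory at each restart; the toy forward operators, segment length, and the simplification of the invariant-encoding sub-phase are all assumptions.

```python
# Hedged sketch of a restarted L-BFGS loop with random shot re-encoding.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_shots, n_params = 32, 50
G = [rng.standard_normal((40, n_params)) for _ in range(n_shots)]  # toy per-shot operators
m_true = rng.standard_normal(n_params)
d = [Gi @ m_true for Gi in G]                                      # per-shot data

def encoded_misfit(m, w):
    """Misfit of one random super shot: encode shots with weights w and sum."""
    Gs = sum(wi * Gi for wi, Gi in zip(w, G))
    ds = sum(wi * di for wi, di in zip(w, d))
    r = Gs @ m - ds
    return 0.5 * r @ r, Gs.T @ r          # value and gradient

m = np.zeros(n_params)
for segment in range(20):                 # restart L-BFGS at every segment
    w = rng.choice([-1.0, 1.0], size=n_shots)      # fresh random encoding
    res = minimize(encoded_misfit, m, args=(w,), jac=True,
                   method="L-BFGS-B", options={"maxiter": 5})
    m = res.x                             # L-BFGS memory is discarded at the restart
print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```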
Convergence Time towards Periodic Orbits in Discrete Dynamical Systems
San Martín, Jesús; Porter, Mason A.
2014-01-01
We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that the points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details what regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594
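A hedged numerical illustration in the spirit of the abstract (my toy setup, not the authors' algorithm): for the logistic map x → r x (1 − x) at r = 3.2, the snippet below estimates empirically how many iterations randomly chosen points need before entering a small neighborhood of the stable period-2 orbit; the map, parameter, and tolerance are arbitrary choices.

```python
import numpy as np

r, tol, max_iter, n_points = 3.2, 1e-3, 500, 20_000
s = np.sqrt((r + 1) * (r - 3))
orbit = np.array([(r + 1 - s) / (2 * r), (r + 1 + s) / (2 * r)])  # period-2 points

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, n_points)          # randomly chosen initial points
steps = np.full(n_points, max_iter)
for k in range(max_iter):
    dist = np.min(np.abs(x[:, None] - orbit[None, :]), axis=1)
    steps = np.where((dist < tol) & (steps == max_iter), k, steps)
    x = r * x * (1 - x)                      # one iteration of the map
print("median / 95th-percentile iterations to enter the neighborhood:",
      int(np.median(steps)), int(np.percentile(steps, 95)))
```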
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained with PX-EM relative to the basic EM algorithm in the random regression case. PMID:14736399
Long-pulse stability limits of the ITER baseline scenario
Jackson, G. L.; Luce, T. C.; Solomon, W. M.; ...
2015-01-14
DIII-D has made significant progress in developing the techniques required to operate ITER, and in understanding their impact on performance when integrated into operational scenarios at ITER-relevant parameters. We demonstrated long-duration plasmas in DIII-D, stable to m/n = 2/1 tearing modes (TMs), with an ITER-similar shape and I_p/aB_T, that evolve to stationary conditions. The operating region most likely to reach stable conditions has normalized pressure β_N ≈ 1.9–2.1 (compared to the ITER baseline design of 1.6–1.8) and a Greenwald normalized density fraction f_GW ≈ 0.42–0.70 (the ITER design is f_GW ≈ 0.8). The evolution of the current profile, using internal inductance (l_i) as an indicator, is found to produce a smaller fraction of stable pulses when l_i is increased above ≈1.1 at the beginning of the β_N flattop. Stable discharges with co-neutral beam injection (NBI) are generally accompanied by a benign n=2 MHD mode. However, if this mode exceeds ≈10 G, the onset of an m/n = 2/1 tearing mode occurs with a loss of confinement. In addition, stable operation with low applied external torque, at or below the extrapolated value expected for ITER, has also been demonstrated. With electron cyclotron (EC) injection, the operating region of stable discharges has been further extended at ITER-equivalent levels of torque, and to ELM-free discharges at higher torque with the addition of an n=3 magnetic perturbation from the DIII-D internal coil set. Lastly, the characterization of the ITER baseline scenario evolution for long pulse duration, the extension to more ITER-relevant values of torque and electron heating, and the suppression of ELMs have significantly advanced the physics basis of this scenario, although significant effort remains in the simultaneous integration of all these requirements.
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
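The neighborhood-convergence claim can be illustrated on a toy problem. The sketch below is my construction, not the authors' GPI code: it runs value iteration on a small random MDP with a bounded "approximation error" injected at every backup, and the final gap stays within the classical bound ε/(1 − γ).

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, gamma, eps = 20, 4, 0.9, 1e-2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] is a distribution over s'
R = rng.uniform(0, 1, (nS, nA))

def bellman(V):
    return np.max(R + gamma * P @ V, axis=1)       # exact Bellman optimality backup

V_exact = np.zeros(nS)
for _ in range(2000):                              # exact value iteration
    V_exact = bellman(V_exact)

V = np.zeros(nS)
for _ in range(2000):                              # value iteration with errors
    V = bellman(V) + rng.uniform(-eps, eps, nS)    # bounded approximation error
gap = np.max(np.abs(V - V_exact))
print(f"final gap {gap:.4f} vs. bound eps/(1-gamma) = {eps / (1 - gamma):.4f}")
```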
Conservative tightly-coupled simulations of stochastic multiscale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2016-05-15
Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged ("implicit") coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration ("explicit") Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
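A hedged toy version of the two coupling strategies (not the paper's coupled diffusion testbed, and not reproducing its iteration-count scalings): one implicit-Euler step of two coupled scalar equations, one carrying a Gaussian noise source, solved to tolerance by Picard fixed-point iteration and by Newton's method. The nonlinearity, time step, and noise draws are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, tol, u0, v0 = 0.2, 1e-12, 1.0, 0.5

def residual(u, v, xi):
    return np.array([u - u0 - dt * (-u + np.tanh(v) + xi),
                     v - v0 - dt * (-v + np.tanh(u))])

def picard(xi):
    u, v, n = u0, v0, 0
    while np.max(np.abs(residual(u, v, xi))) > tol:
        u, v = u0 + dt * (-u + np.tanh(v) + xi), v0 + dt * (-v + np.tanh(u))
        n += 1
    return n

def newton(xi):
    x, n = np.array([u0, v0]), 0
    while np.max(np.abs(residual(x[0], x[1], xi))) > tol:
        J = np.array([[1 + dt, -dt / np.cosh(x[1]) ** 2],
                      [-dt / np.cosh(x[0]) ** 2, 1 + dt]])   # 2x2 Jacobian of residual
        x = x - np.linalg.solve(J, residual(x[0], x[1], xi))
        n += 1
    return n

for amp in (0.1, 1.0, 10.0):
    xi = amp * rng.standard_normal()        # one draw of the white-noise source
    print(f"noise {xi:+6.2f}: Picard {picard(xi):3d} iterations, Newton {newton(xi)}")
```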
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, comprising an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve performance along the iteration direction, and the feedback controllers are used to improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI); the LMI can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
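The generic feedback feed-forward ILC structure can be sketched on a scalar plant. The gains, plant, and reference below are illustrative choices of mine, not the LMI-derived gains of the paper: a feed-forward input is learned across trials while a feedback term corrects errors within each trial.

```python
import numpy as np

# Discrete-time SISO plant x(t+1) = a x(t) + b u(t), y = c x, over N steps
a, b, c, N, n_trials = 0.8, 1.0, 1.0, 50, 8
y_ref = np.sin(np.linspace(0, np.pi, N))            # desired output trajectory
L, Kp = 0.9 / (c * b), 0.3                          # learning and feedback gains

u_ff = np.zeros(N)                                  # feed-forward input, updated per trial
for k in range(n_trials):
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        e = y_ref[t] - c * x                        # current-iteration tracking error
        u = u_ff[t] + Kp * e                        # feedback corrects within the trial
        x = a * x + b * u
        y[t] = c * x
    u_ff = u_ff + L * (y_ref - y)                   # learning update along iterations
    print(f"trial {k}: max tracking error {np.max(np.abs(y_ref - y)):.2e}")
```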
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient point, a new point is added to this set in each iteration. Each time a salient point is added, the decision function is updated; this creates a condition for selecting the next point such that it is not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
Efficient image projection by Fourier electroholography.
Makowski, Michał; Ducin, Izabela; Kakarenko, Karol; Kolodziejczyk, Andrzej; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz
2011-08-15
An improved efficient projection of color images is presented. It uses a phase spatial light modulator with three iteratively optimized Fourier holograms displayed simultaneously, each for one primary color. This spatial division, instead of time division, provides stable images. The pixelated structure of the modulator and fluctuations of the liquid crystal molecules cause a zeroth-order peak, which is eliminated by additional wavelength-dependent phase factors that shift it before the image plane, where it is blocked with a matched filter. Speckles are suppressed by time integration of variable speckle patterns generated by additional randomizations of an initial phase and minor changes of the signal. © 2011 Optical Society of America
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone-beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise-constant property assumed of reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is thus proposed. Differing from existing algorithms, it incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is therefore practically attractive for low-dose CBCT imaging in the clinic.
Inductive flux usage and its optimization in tokamak operation
Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...
2014-07-30
The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the conditions required to meet the physics objectives (500 MW fusion power with an energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput materials research is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should create opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the structure of the search space, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modeled. Evaluating new algorithms on benchmarks is compulsory given the lack of prior evidence of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on future machine learning performance is also quantified. The minimum sample size required for the algorithm to be statistically discriminated from simple random sampling is investigated.
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and easily implementable stochastic inverse method for the characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA) without requiring a structured grid, unlike Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. The performance of this program was compared with other general solving programs via the estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison with other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations for the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Our findings support the use of preconditioned conjugate gradient-based methods for solving large breeding value problems.
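For reference, a minimal preconditioned conjugate gradient solver with a diagonal (Jacobi) preconditioner, as a generic sketch of the solver class used here; the three-step iteration-on-data matrix-vector product is abstracted into a plain `A @ x`, and the test matrix is an arbitrary SPD stand-in for mixed model equations.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Solve A x = b; M_inv_diag holds the inverse of the diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r                 # apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(4)
Q = rng.standard_normal((200, 200))
A = Q @ Q.T + 200 * np.eye(200)            # symmetric positive definite test matrix
b = rng.standard_normal(200)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))
```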
Increasing reconstruction quality of diffractive optical elements displayed with LC SLM
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-03-01
Phase liquid crystal (LC) spatial light modulators (SLMs) are actively used in various applications. However, the majority of scientific applications require stable phase modulation, which might be hard to achieve with commercially available SLMs due to their consumer origin. The use of a digital voltage addressing scheme leads to temporal phase fluctuations, which result in lower diffraction efficiency and reconstruction quality of displayed diffractive optical elements (DOEs). Due to the high periodicity of the fluctuations, it should be possible to use knowledge of them during DOE synthesis to minimize the negative effect. We synthesized DOEs using accurately measured phase fluctuations of the phase LC SLM "HoloEye PLUTO VIS" to minimize their negative impact on the reconstruction of displayed DOEs. Synthesis was conducted with the versatile direct search with random trajectory (DSRT) method in the following way. Before DOE synthesis begins, the two-dimensional dependency of the SLM phase shift on addressed signal level and time from frame start is obtained. Then synthesis begins. First, an initial phase distribution is created. Second, a random trajectory of consecutive processing of all DOE elements is generated. Then the iterative process begins: each DOE element sequentially has its value changed to the one that provides a better value of the objective criterion, e.g. lower deviation of the reconstructed image from the original one. If the current element value provides the best objective criterion value, it is left unchanged. After all elements are processed, the iteration repeats until stagnation is reached. It is demonstrated that applying knowledge of the SLM phase fluctuations in DOE synthesis with the DSRT method leads to a noticeable increase in DOE reconstruction quality.
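A much-simplified sketch of DSRT-style synthesis (my construction: a far-field FFT reconstruction model, 8 phase levels, and a toy target are assumptions, and the measured SLM fluctuation map used in the paper is omitted): pixels are visited along a random trajectory, every phase level is tried, and the best one is kept.

```python
import numpy as np

rng = np.random.default_rng(5)
N, levels = 32, 8
target = np.zeros((N, N)); target[12:20, 12:20] = 1.0   # desired intensity pattern
target /= target.sum()
phase = rng.integers(0, levels, (N, N))                 # initial random phase DOE

def criterion(ph):
    recon = np.abs(np.fft.fft2(np.exp(2j * np.pi * ph / levels))) ** 2
    return np.sum((recon / recon.sum() - target) ** 2)  # deviation from the target

for sweep in range(3):                                  # repeat until stagnation
    for idx in rng.permutation(N * N):                  # random trajectory
        i, j = divmod(int(idx), N)
        vals = []
        for v in range(levels):                         # try every phase level
            phase[i, j] = v
            vals.append(criterion(phase))
        phase[i, j] = int(np.argmin(vals))              # keep the best value
    print(f"sweep {sweep}: criterion {criterion(phase):.3e}")
```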
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. The comparison of algorithms was based on solutions of mixed model equations obtained with a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data comprised 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
NASA Astrophysics Data System (ADS)
Wingen, A.; Wilcox, R. S.; Seal, S. K.; Unterberg, E. A.; Cianciosa, M. R.; Delgado-Aparicio, L. F.; Hirshman, S. P.; Lao, L. L.
2018-03-01
Large, spontaneous m/n = 1/1 helical cores are shown to be expected in tokamaks such as ITER with extended regions of low- or reversed- magnetic shear profiles and q near 1 in the core. The threshold for this spontaneous symmetry breaking is determined using VMEC scans, beginning with reconstructed 3D equilibria from DIII-D and Alcator C-Mod based on observed internal 3D deformations. The helical core is a saturated internal kink mode (Wesson 1986 Plasma Phys. Control. Fusion 28 243); its onset threshold is shown to be proportional to (dp/dρ)/B_t² around q = 1. Below the threshold, applied 3D fields can drive a helical core to finite size, as in DIII-D. The helical core size thereby depends on the magnitude of the applied perturbation. Above it, a small, random 3D kick causes a bifurcation from axisymmetry and excites a spontaneous helical core, which is independent of the kick size. Systematic scans of the q-profile show that the onset threshold is very sensitive to the q-shear in the core. Helical cores occur frequently in Alcator C-Mod during ramp-up when slow current penetration results in a reversed shear q-profile, which is favorable for helical core formation. Finally, a comparison of the helical core onset threshold for discharges from DIII-D, Alcator C-Mod and ITER confirms that while DIII-D is marginally stable, Alcator C-Mod and especially ITER are highly susceptible to helical core formation without being driven by an externally applied 3D magnetic field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wingen, A.; Wilcox, R. S.; Seal, S. K.
In this paper, large, spontaneous m/n = 1/1 helical cores are shown to be expected in tokamaks such as ITER with extended regions of low- or reversed- magnetic shear profiles and q near 1 in the core. The threshold for this spontaneous symmetry breaking is determined using VMEC scans, beginning with reconstructed 3D equilibria from DIII-D and Alcator C-Mod based on observed internal 3D deformations. The helical core is a saturated internal kink mode (Wesson 1986 Plasma Phys. Control. Fusion 28 243); its onset threshold is shown to be proportional to (dp/dρ)/B_t² around q = 1. Below the threshold, applied 3D fields can drive a helical core to finite size, as in DIII-D. The helical core size thereby depends on the magnitude of the applied perturbation. Above it, a small, random 3D kick causes a bifurcation from axisymmetry and excites a spontaneous helical core, which is independent of the kick size. Systematic scans of the q-profile show that the onset threshold is very sensitive to the q-shear in the core. Helical cores occur frequently in Alcator C-Mod during ramp-up when slow current penetration results in a reversed shear q-profile, which is favorable for helical core formation. In conclusion, a comparison of the helical core onset threshold for discharges from DIII-D, Alcator C-Mod and ITER confirms that while DIII-D is marginally stable, Alcator C-Mod and especially ITER are highly susceptible to helical core formation without being driven by an externally applied 3D magnetic field.
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
Shaping asteroid models using genetic evolution (SAGE)
NASA Astrophysics Data System (ADS)
Bartczak, P.; Dudziński, G.
2018-02-01
In this work, we present SAGE (shaping asteroid models using genetic evolution), an asteroid modelling algorithm based solely on photometric lightcurve data. It produces non-convex shapes, orientations of the rotation axes and rotational periods of asteroids. The main concept behind a genetic evolution algorithm is to produce random populations of shapes and spin-axis orientations by mutating a seed shape and iterating the process until it converges to a stable global minimum. We tested SAGE on five artificial shapes. We also modelled asteroids 433 Eros and 9 Metis, since ground truth observations for them exist, allowing us to validate the models. We compared the derived shape of Eros with the NEAR Shoemaker model and that of Metis with adaptive optics and stellar occultation observations since other models from various inversion methods were available for Metis.
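A minimal (1+λ)-style evolution loop in the spirit of SAGE, on a stand-in problem of my own: a seed parameter vector is mutated into a random population, the best child replaces the seed, and the mutation scale shrinks near convergence. The real SAGE mutates 3D shapes and spin axes and scores them against photometric lightcurves.

```python
import numpy as np

rng = np.random.default_rng(6)
target = rng.uniform(-1, 1, 30)                    # stands in for the lightcurve data

def fitness(model):
    return -np.sum((model - target) ** 2)          # higher is better (negative misfit)

seed, sigma = np.zeros(30), 0.5                    # seed "shape" and mutation scale
for gen in range(200):
    children = seed + sigma * rng.standard_normal((20, 30))   # random population
    best = children[np.argmax([fitness(c) for c in children])]
    if fitness(best) > fitness(seed):
        seed = best                                # new seed for the next generation
    else:
        sigma *= 0.9                               # shrink mutations near convergence
print("final misfit:", -fitness(seed))
```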
Fast and secure encryption-decryption method based on chaotic dynamics
Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.
1995-01-01
A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing, and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo-random integer sequence. A system for accomplishing the invention is also provided.
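A hedged sketch of the idea, not the patent's exact construction: m independent logistic maps produce chaotic iterates that are combined into an "initial" value and transformed into a pseudo-random byte, repeated for length L; XOR is my illustrative choice for the final encryption step. This toy is for illustration only and is not cryptographically vetted.

```python
import numpy as np

def chaotic_keystream(keys, length):
    """keys: m (x0, r) pairs with r near 4; returns `length` pseudo-random bytes."""
    x = np.array([x0 for x0, _ in keys])
    r = np.array([ri for _, ri in keys])
    out = bytearray()
    for _ in range(length):
        x = r * x * (1 - x)                      # advance the m independent maps
        combined = x.sum() % 1.0                 # "initial" value from the m iterates
        out.append(int(combined * 256) & 0xFF)   # transform to a pseudo-random integer
    return bytes(out)

def crypt(message: bytes, keys):
    ks = chaotic_keystream(keys, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))   # XOR encrypts and decrypts

keys = [(0.123456, 3.99991), (0.654321, 3.99993), (0.111111, 3.99997)]
cipher = crypt(b"attack at dawn", keys)
print(cipher.hex(), crypt(cipher, keys))         # decrypting restores the plaintext
```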
Drawing dynamical and parameters planes of iterative families and methods.
Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).
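A Python stand-in for drawing a dynamical plane (the paper uses MATLAB): each starting point in the complex plane is colored by the root it converges to. Newton's method on p(z) = z² − 1 is used here as a placeholder for the Kim family, so the scheme, polynomial, grid, and iteration budget are all illustrative assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

n, max_iter, tol = 400, 40, 1e-6
x = np.linspace(-2, 2, n)
z = x[None, :] + 1j * x[:, None]                 # grid of initial points
basin = np.zeros((n, n))
for root_id, root in enumerate((1.0, -1.0), start=1):
    zz = z.copy()
    for _ in range(max_iter):
        zz = zz - (zz**2 - 1) / (2 * zz + 1e-12) # Newton iteration for z^2 - 1
    basin[np.abs(zz - root) < tol] = root_id     # converged to this root
plt.imshow(basin, extent=[-2, 2, -2, 2])
plt.title("dynamical plane")
plt.savefig("dynamical_plane.png")
```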
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging using inversion of data obtained from the very early time electromagnetic system (VETEM) is discussed. The study was carried out using the distorted Born iterative method to address the intrinsically nonlinear nature of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
Ultrametric properties of the attractor spaces for random iterated linear function systems
NASA Astrophysics Data System (ADS)
Buchovets, A. G.; Moskalev, P. V.
2018-03-01
We investigate attractors of random iterated linear function systems as independent spaces embedded in ordinary Euclidean space. Introducing, on the set of attractor points, a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The properties of disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define the attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.
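For reference, the strengthened triangle inequality that turns a metric space into an ultrametric space is, in standard notation:

```latex
d(x,z) \;\le\; \max\{\, d(x,y),\; d(y,z) \,\} \qquad \text{for all } x, y, z .
```

Since max{a, b} ≤ a + b, every ultrametric is in particular a metric; the inequality also forces every triangle to be isosceles with the two longest sides equal, which is the source of the disconnectedness and hierarchical (tree-like) self-similarity mentioned above.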
Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys
NASA Astrophysics Data System (ADS)
Han, Chao; Shen, Yuzhen; Ma, Wenlin
2017-12-01
An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity, low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and then encrypted with different random phases, respectively. The encrypted phase-only images are each inverse Fourier transformed, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region at different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied by a random phase in the frequency domain; the phase part of the frequency spectrum is then truncated and the amplitude information retained. The random phase, propagation distances, and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iterative processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images. The superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a highly desired security level.
Analytic approximation for random muffin-tin alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, R.; Gray, L.J.; Kaplan, T.
1983-03-15
The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many aspects of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local shape-context descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. The main idea is to use the shape-context descriptors to control the random walk probability matrix during the iteration: a bias matrix is calculated from the descriptors and used in the iteration to enhance the accuracy of the random walks and random jumps, and finally the one-to-one registration result is obtained by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, and comparisons with other algorithms, confirm that the new method produces excellent results in graphic matching.
Forward marching procedure for separated boundary-layer flows
NASA Technical Reports Server (NTRS)
Carter, J. E.; Wornom, S. F.
1975-01-01
A forward-marching procedure for separated boundary-layer flows which permits the rapid and accurate solution of flows of limited extent is presented. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme, in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms into the finite-difference equations. This provides both unconditional diagonal dominance and a column iterative scheme that is found to be stable by von Neumann stability analysis.
Nonlinear random response prediction using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.
1993-01-01
An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
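A hedged scalar illustration of the equivalent linearization idea the entry builds on (my construction, not the MSC/NASTRAN DMAP implementation): for a Duffing oscillator x'' + c x' + k (x + ε x³) = w(t) driven by white noise of two-sided PSD S0, the cubic term is replaced by an equivalent stiffness k_eq = k (1 + 3 ε E[x²]), and the linear response gives E[x²] = π S0 / (c k_eq), yielding a fixed-point iteration for the rms displacement.

```python
import math

c, k, eps, S0 = 0.05, 1.0, 0.3, 0.01
sigma2 = math.pi * S0 / (c * k)             # linear response as the starting guess
for it in range(100):
    k_eq = k * (1.0 + 3.0 * eps * sigma2)   # equivalent linear stiffness
    sigma2_new = math.pi * S0 / (c * k_eq)  # rms response of the linearized system
    if abs(sigma2_new - sigma2) < 1e-14:
        break
    sigma2 = sigma2_new
print(f"E[x^2] = {sigma2:.5f} after {it} iterations "
      f"(linear value {math.pi * S0 / (c * k):.5f})")
```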
NASA Astrophysics Data System (ADS)
Calini, A.; Schober, C. M.
2013-09-01
In this article we present the results of a broad numerical investigation of the stability of breather-type solutions of the nonlinear Schrödinger (NLS) equation, specifically the one- and two-mode breathers for an unstable plane wave, which are frequently used to model rogue waves. The numerical experiments involve large ensembles of perturbed initial data for six typical random perturbations. Ensemble estimates of the "closeness", A(t), of the perturbed solution to an element of the respective unperturbed family indicate that the only neutrally stable breathers are the ones of maximal dimension, that is: given an unstable background with N unstable modes, the only neutrally stable breathers are the N-dimensional ones (obtained as a superimposition of N simple breathers via iterated Bäcklund transformations). Conversely, breathers that are not fully saturated are sensitive to noisy environments and are unstable. Interestingly, A(t) is smallest for the coalesced two-mode breather, indicating that the coalesced case may be the most robust two-mode breather in a laboratory setting. The numerical simulations confirm and provide a realistic realization of the stability behavior established analytically by the authors.
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence-accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to the SCF convergence-accelerating methods in computational quantum chemistry packages.
Wan, Tao; Madabhushi, Anant; Phinikaridou, Alkystis; Hamilton, James A.; Hua, Ning; Pham, Tuan; Danagoulian, Jovanna; Kleiman, Ross; Buckler, Andrew J.
2014-01-01
Purpose: To develop a new spatio-temporal texture (SpTeT) based method for distinguishing vulnerable versus stable atherosclerotic plaques on DCE-MRI using a rabbit model of atherothrombosis. Methods: Aortic atherosclerosis was induced in 20 New Zealand White rabbits by cholesterol diet and endothelial denudation. MRI was performed before (pretrigger) and after (posttrigger) inducing plaque disruption with Russell's viper venom and histamine. Of the 30 vascular targets (segments) under histology analysis, 16 contained thrombus (vulnerable) and 14 did not (stable). A total of 352 voxel-wise computerized SpTeT features, including 192 Gabor, 36 Kirsch, 12 Sobel, 52 Haralick, and 60 first-order textural features, were extracted on DCE-MRI to capture subtle texture changes in the plaques over the course of contrast uptake. Different combinations of SpTeT feature sets, in which the features were ranked by a minimum-redundancy maximum-relevance feature selection technique, were evaluated via a random forest classifier. A 500-iteration twofold cross-validation was performed for discriminating vulnerable from stable atherosclerotic plaque on a per-voxel basis. Four quantitative metrics were utilized to measure the classification results in separating vulnerable and stable plaques. Results: The quantitative results show that the combination of the five classes of SpTeT features can distinguish between vulnerable (disrupted plaques with an overlying thrombus) and stable plaques, with best AUC values of 0.9631 ± 0.0088, accuracy of 89.98% ± 0.57%, sensitivity of 83.71% ± 1.71%, and specificity of 94.55% ± 0.48%. Conclusions: Vulnerable and stable plaques can be distinguished by SpTeT-based features. The SpTeT features, following validation on larger datasets, could be established as effective and reliable imaging biomarkers for noninvasively assessing atherosclerotic risk. PMID:24694153
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through the imposition of total variation regularization, subsurface structures presenting sharp discontinuities are preserved better than with a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems, the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead, we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency, an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Results presented for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy at reduced computational and memory demands compared with classical approaches.
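A hedged 1-D sketch of TV regularization via iteratively reweighted least squares (IRLS); the randomized GSVD and alternating-direction details of the paper are omitted, and the toy forward operator, regularization weight, and smoothing parameter are assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
m_true = np.zeros(n); m_true[30:60] = 1.0; m_true[60:80] = -0.5   # blocky model
G = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
d = G @ m_true + 0.01 * rng.standard_normal(n)                    # smoothed noisy data

D = np.diff(np.eye(n), axis=0)                 # finite-difference (TV) operator
alpha, eps = 1e-2, 1e-6
m = np.zeros(n)
for it in range(30):
    w = 1.0 / np.sqrt((D @ m) ** 2 + eps)      # IRLS weights approximating |Dm|_1
    A = G.T @ G + alpha * D.T @ (w[:, None] * D)
    m = np.linalg.solve(A, G.T @ d)            # reweighted least-squares solve
print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```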
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with the intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic underlying process.
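A hedged sketch of the random-window stacking idea on synthetic data (using Aki's maximum-likelihood b estimator; the real study uses the El Hierro catalog): a catalog whose true b value switches midway produces a clearly bimodal stacked distribution of window estimates. Window sizes, catalog length, and the mode-splitting threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
m_min = 1.0
beta = lambda b: b * np.log(10)
# synthetic catalog: first half b = 1.0, second half b = 1.5 (exponential magnitudes)
mags = np.concatenate([m_min + rng.exponential(1 / beta(1.0), 5000),
                       m_min + rng.exponential(1 / beta(1.5), 5000)])

est = []
for _ in range(20_000):
    start = rng.integers(0, len(mags) - 200)
    w = mags[start:start + rng.integers(50, 200)]     # randomly chosen window
    est.append(np.log10(np.e) / (w.mean() - m_min))   # Aki (1965) estimator
est = np.array(est)
print("stacked modes near b =", est[est < 1.25].mean().round(2),
      "and b =", est[est >= 1.25].mean().round(2))
```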
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for the random variables through linear mappings such that the representation of the quantity of interest is sparser in the new basis functions associated with the new random variables. This sparsity increases both the efficiency and the accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings, which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications to solving stochastic partial differential equations and high-dimensional (O(100)) problems.
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters
Wang, Zhihao; Yi, Jing
2016-01-01
To address the shortcoming of the fuzzy c-means (FCM) algorithm, namely that the number of clusters must be known in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule √n, and obtained the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that, as the number of clusters approached the number of objects in the dataset, the value of the clustering validity index did not monotonically decrease toward zero, so that the optimal number of clusters retained robustness and decision power. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters through an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iterations of FCM with a stable clustering result. PMID:28042291
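For context, a minimal sketch of the standard FCM update that such self-adaptive wrappers iterate; the fuzzifier m = 2, cluster count, and synthetic data are illustrative choices, not from the paper:

```python
import numpy as np

def fcm_step(X, U, m=2.0):
    """One fuzzy c-means iteration: centroid update, then membership update."""
    W = U ** m                                     # fuzzified memberships, shape (c, n)
    C = (W @ X) / W.sum(axis=1, keepdims=True)     # weighted centroids, shape (c, d)
    d2 = ((X[None, :, :] - C[:, None, :]) ** 2).sum(-1) + 1e-12  # squared distances
    p = 1.0 / (m - 1.0)
    U_new = (d2 ** -p) / (d2 ** -p).sum(axis=0)    # columns sum to one
    return C, U_new

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))                  # toy data
U = rng.random((3, 200)); U /= U.sum(axis=0)       # random initial memberships
for _ in range(50):
    C, U = fcm_step(X, U)
```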
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method of Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method of Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
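For orientation, a sketch of the classical univariate DerSimonian-Laird moment estimator that these multivariate, non-iterative methods generalize; the toy effect sizes and within-study variances are invented:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Univariate DL estimate of between-study variance tau^2 and pooled effect."""
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = (w * y).sum() / w.sum()
    Q = (w * (y - mu_fe) ** 2).sum()              # Cochran's heterogeneity statistic
    tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu_re = (w_re * y).sum() / w_re.sum()
    return tau2, mu_re, 1.0 / w_re.sum()          # tau^2, pooled mean, its variance

y = np.array([0.30, 0.12, 0.45, 0.21])            # toy study effects
v = np.array([0.01, 0.02, 0.015, 0.025])          # their within-study variances
print(dersimonian_laird(y, v))
```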
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
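A minimal sketch of the described idea: reuse the factorization of the unperturbed stiffness K0 as the preconditioner of a stationary iteration for the perturbed system (K0 + dK) x = f. The matrices below are random stand-ins, not a finite element model:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 30
K0 = np.diag(np.linspace(1.0, 2.0, n))       # unperturbed stiffness (SPD stand-in)
dK = 0.05 * rng.standard_normal((n, n))      # small random perturbation
f = rng.standard_normal(n)

lu = lu_factor(K0)                           # factorize once, reuse every iteration
x = lu_solve(lu, f)                          # zeroth-order solution
for _ in range(25):
    x = lu_solve(lu, f - dK @ x)             # x <- K0^{-1} (f - dK x)

print(np.linalg.norm((K0 + dK) @ x - f))     # residual of the perturbed system
```

The iteration converges when the spectral radius of K0^{-1} dK is below one, which is exactly the small-perturbation regime the formulation targets.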
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
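A minimal sketch of the random-projection step alone, which underlies the feature compression described above; the dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, D, d = 500, 10_000, 64                      # samples, ambient dim, projected dim
X = rng.standard_normal((n, D))                # high-dimensional feature matrix

R = rng.standard_normal((D, d)) / np.sqrt(d)   # data-independent Gaussian projection
Y = X @ R                                      # compressed features for, e.g., LSPI

# Johnson-Lindenstrauss: pairwise distances are approximately preserved.
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```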
ITER Baseline Scenario with ECCD Applied to Neoclassical Tearing Modes in DIII-D
NASA Astrophysics Data System (ADS)
Welander, A. G.; La Haye, R. J.; Lohr, J. M.; Humphreys, D. A.; Prater, R.; Paz-Soldan, C.; Kolemen, E.; Turco, F.; Olofsson, E.
2015-11-01
The neoclassical tearing mode (NTM) is a magnetic island that can occur on flux surfaces where the safety factor q is a rational number. Both m/n=3/2 and 2/1 NTMs degrade confinement, and the 2/1 mode often locks to the wall and disrupts the plasma. An NTM can be suppressed by depositing electron cyclotron current drive (ECCD) on the q-surface by injecting microwave beams into the plasma from gyrotrons. Recent DIII-D experiments have studied the application of ECCD/ECRH in the ITER Baseline Scenario. The power required from the gyrotrons can be significant enough to impact the fusion gain Q in ITER. However, if gyrotron power could be minimized or turned off in ITER when not needed, this impact would be small. In fact, tearing-stable operation at low torque has been achieved previously in DIII-D without EC power. A vision for NTM control in ITER will be described together with results obtained from simulations and experiments in DIII-D under ITER-like conditions. Work supported by the US DOE under DE-FC02-04ER54698, DE-AC02-09CH11466, DE-FG02-04ER54761.
Irradiation tests of ITER candidate Hall sensors using two types of neutron spectra.
Ďuran, I; Bolshakova, I; Viererbl, L; Sentkerestiová, J; Holyaka, R; Lahodová, Z; Bém, P
2010-10-01
We report on irradiation tests of InSb-based Hall sensors at two irradiation facilities with two distinct types of neutron spectra. One was a fission reactor neutron spectrum with a significant presence of thermal neutrons, while the other was a purely fast neutron field. A total neutron fluence of the order of 10^16 cm^-2 was accumulated in both cases, leading to a significant drop in Hall sensor sensitivity in the case of the fission reactor spectrum, while stable performance was observed in the purely fast neutron field. This finding suggests that the performance of this particular type of Hall sensor is governed predominantly by transmutation. Additionally, it further stresses the need to test ITER candidate Hall sensors under a neutron flux with an ITER-relevant spectrum.
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
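A minimal sketch of simulated annealing with random restarts in the spirit of the iterated search described above; the multimodal objective, cooling schedule, and restart count are illustrative choices, not the RISC method itself:

```python
import numpy as np

def rastrigin(x):                                  # toy multimodal objective
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def annealing_run(rng, dim=2, steps=2000, T0=2.0):
    x = rng.uniform(-5, 5, dim); fx = rastrigin(x)
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-3            # linear cooling schedule
        y = x + rng.normal(0, 0.3, dim)            # random local move
        fy = rastrigin(y)
        if fy < fx or rng.random() < np.exp((fx - fy) / T):   # Metropolis accept
            x, fx = y, fy
    return x, fx

rng = np.random.default_rng(4)
# Iterating (restarting) the run guards against entrapment in local minima.
best = min((annealing_run(rng) for _ in range(10)), key=lambda t: t[1])
print(best)
```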
Iterative dip-steering median filter
NASA Astrophysics Data System (ADS)
Huo, Shoudong; Zhu, Weihong; Shi, Taikun
2017-09-01
Seismic data are always contaminated with high noise components, which present processing challenges especially for signal preservation and its true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested by both synthetic and field data gathers and also compared with the commonly used f-k least squares de-noising and f-x deconvolution.
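As an illustration of the core operation only, a sketch of median filtering along one dominant dip on a synthetic 2D gather. The integer dip p (in samples per trace) and window half-width are invented inputs; the paper's Fourier-radial dip estimation and the iteration over residual dips are not reproduced here:

```python
import numpy as np

def dip_median(d, p, half=5):
    """Median-filter each sample over neighboring traces, following dip p."""
    nt, nx = d.shape
    out = np.zeros_like(d)
    for ix in range(nx):
        lo, hi = max(0, ix - half), min(nx, ix + half + 1)
        for it in range(nt):
            vals = [d[it + p * (jx - ix), jx]          # sample shifted along the dip
                    for jx in range(lo, hi)
                    if 0 <= it + p * (jx - ix) < nt]
            out[it, ix] = np.median(vals)
    return out

rng = np.random.default_rng(5)
d = rng.normal(0, 0.5, (200, 40))                      # random noise
for ix in range(40):                                   # one dipping coherent event
    d[60 + 2 * ix, ix] += 3.0
filtered = dip_median(d, p=2)                          # steer the median along dip 2
```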
Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory
Zhuang, Qian; Di, Zengru; Wu, Jinshan
2014-01-01
Using the logit quantal response form as the response function in each step, the original definition of the static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibria. This extension resembles the extension from static Nash equilibria (NEs) to evolutionarily stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we determine that SQREs have better predictive power than QREs and NEs. PMID:25157502
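A minimal sketch of the iterated logit response dynamic, simplified to symmetric self-play in a 2x2 game; the payoffs and rationality parameter lambda are illustrative. Fixed points of this map are QREs, and stability is read off from whether iterates settle there:

```python
import numpy as np

A = np.array([[1.0, 5.0], [0.0, 3.0]])      # toy payoff matrix for the row player
lam = 2.0                                    # logit rationality parameter

def logit_response(p, lam, A):
    """Map the current mixed strategy p to the logit quantal response against it."""
    u = A @ p                                # expected payoff of each pure strategy
    e = np.exp(lam * (u - u.max()))          # numerically stabilized softmax
    return e / e.sum()

p = np.array([0.5, 0.5])
for _ in range(200):                         # iterate; convergence marks a stable QRE
    p = logit_response(p, lam, A)
print(p)
```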
SULTAN measurement and qualification: ITER-US-LLNL-NMARTOVETSKY- 092008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N N
2006-09-21
Measuring the characteristics of full-scale ITER CICC at SULTAN is the critical qualification test. If the volt-ampere characteristic (VAC) or volt-temperature characteristic (VTC) is distorted, the criterion of 10 uV/m may not be a valid criterion to judge the conductor performance. Only measurements with a clear absence of, or low, signals from the current distribution should be considered as quantitatively representative, although in some obvious circumstances one can judge whether a conductor will meet or fail ITER requirements. SULTAN full-scale ITER CICC testing should be done with all measures taken to ensure uniform current redistribution. A full removal of Cr plating in the joint area and complete solder filling of the joints (with provision of the central channel for helium flow) should be mandatory for DC qualification samples for ITER. Also, T and I should be increased slowly so that an equilibrium can be established for accurate measurement of Tcs, Ic and N. It is also desirable to go up and down in current and/or temperature (within the stable range) to make sure that the equilibrium is reached.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. The results of calculations carried out on examples with exact solutions, as well as with an additional condition specified with random errors, are presented. The results showed a high efficiency of the iterative conjugate gradient method for the numerical solution of this inverse problem.
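A minimal sketch of the conjugate gradient iteration on a symmetric positive-definite system, the kernel that such boundary-recovery loops repeat; the random SPD system stands in for the discretized problem:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # new A-conjugate direction
        rs = rs_new
    return x

rng = np.random.default_rng(6)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40 * np.eye(40)          # SPD test matrix
b = rng.standard_normal(40)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```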
Randomly chosen chaotic maps can give rise to nearly ordered behavior
NASA Astrophysics Data System (ADS)
Boyarsky, Abraham; Góra, Paweł; Islam, Md. Shafiqul
2005-10-01
Parrondo’s paradox [J.M.R. Parrondo, G.P. Harmer, D. Abbott, New paradoxical games based on Brownian ratchets, Phys. Rev. Lett. 85 (2000), 5226-5229] (see also [O.E. Percus, J.K. Percus, Can two wrongs make a right? Coin-tossing games and Parrondo’s paradox, Math. Intelligencer 24 (3) (2002) 68-72]) states that two losing gambling games, when combined one after the other (either deterministically or randomly), can result in a winning game: that is, a losing game followed by a losing game = a winning game. Inspired by this paradox, a recent study [J. Almeida, D. Peralta-Salas, M. Romera, Can two chaotic systems give rise to order? Physica D 200 (2005) 124-132] asked an analogous question in discrete-time dynamical systems: can two chaotic systems give rise to order, namely can they be combined into another dynamical system which does not behave chaotically? Numerical evidence is provided in that study that two chaotic quadratic maps, when composed with each other, create a new dynamical system which has a stable periodic orbit. The question of what happens in the case of random composition of maps is posed there but left unanswered. In this note we present an example of a dynamical system where, at each iteration, a map is chosen in a probabilistic manner from a collection of chaotic maps. The resulting random map is proved to have an infinite absolutely continuous invariant measure (acim) with spikes at two points. From this we show that the dynamics behaves in a nearly ordered manner. When the foregoing maps are applied one after the other, deterministically as in [O.E. Percus, J.K. Percus, Can two wrongs make a right? Coin-tossing games and Parrondo’s paradox, Math. Intelligencer 24 (3) (2002) 68-72], the resulting composed map has a periodic orbit which is stable.
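A minimal sketch of a random dynamical system in this spirit: at each step one of two chaotic maps is chosen with probability 1/2, and the orbit's empirical distribution is inspected. The two maps are standard illustrative choices, not those of the paper:

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)            # chaotic logistic map
g = lambda x: (2.0 * x) % 1.0                # chaotic doubling map

rng = np.random.default_rng(7)
x, orbit = 0.1234, []
for _ in range(100_000):
    x = f(x) if rng.random() < 0.5 else g(x) # random composition of chaotic maps
    orbit.append(x)

# The orbit histogram approximates the invariant measure of the random map.
hist, edges = np.histogram(orbit, bins=20, range=(0, 1), density=True)
print(np.round(hist, 2))
```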
NASA Astrophysics Data System (ADS)
Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, YH
2018-05-01
In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented over a networked control system (NCS), where only the off-line data are transmitted over the network while the real-time data are delivered point-to-point. Thus, there are two controllers rather than one in the control system, which makes better use of the saved and current information and thereby improves the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropout occurs randomly, and the dropout is modeled as a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, the simulation results verify the effectiveness of the proposed method.
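A minimal sketch of a plain P-type ILC update with Bernoulli measurement dropouts on a scalar linear plant. The plant, learning gain, and dropout rate are illustrative; the paper treats a nonlinear plant with a combined open-closed-loop law:

```python
import numpy as np

rng = np.random.default_rng(8)
T, trials, gamma, p_drop = 50, 30, 0.8, 0.2
yd = np.sin(np.linspace(0, 2 * np.pi, T + 1))[1:]     # desired output y_d(t+1)

u = np.zeros(T)
for k in range(trials):
    x, y = 0.0, np.zeros(T)
    for t in range(T):                                # toy plant x+ = 0.9 x + u, y = x
        x = 0.9 * x + u[t]
        y[t] = x
    e = yd - y
    received = rng.random(T) >= p_drop                # Bernoulli data dropout mask
    u = u + gamma * e * received                      # learn only where data arrived
print(np.abs(yd - y).max())                           # tracking error after learning
```

Dropouts only slow the trial-to-trial contraction here; they do not destroy it, which is the intuition behind convergence in expectation.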
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog of the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation, and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure is a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter playing the role of the time step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms based not only on the Euler method but also on lower-order Runge-Kutta methods applied to discretize the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
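A minimal sketch of a simultaneous MART (SMART) flavor of the multiplicative update on a tiny consistent system; the relaxation parameter lam corresponds to the discretization time step mentioned above, and the block structure of BI-MART is omitted:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.random((30, 16))                 # nonnegative projection matrix
x_true = rng.random(16) + 0.1
b = A @ x_true                           # consistent tomographic data

x = np.ones(16)                          # positive initial image
lam = 0.5                                # scaling parameter ~ time step
for _ in range(500):
    ratio = b / (A @ x)                  # measured data / forward projection
    # multiplicative update; positivity of x is preserved automatically
    x *= np.exp(lam * (A.T @ np.log(ratio)) / A.sum(axis=0))
print(np.abs(x - x_true).max())
```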
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
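A minimal sketch of a damped Newton iteration with a fixed damping parameter, the workhorse described above; the two-equation system is a toy stand-in for the periodicity conditions, not a rotorcraft model:

```python
import numpy as np

def F(z):                                    # toy residual standing in for trim equations
    x, y = z
    return np.array([x**2 + y**2 - 1.0, np.sin(x) - y])

def jac(z, h=1e-7):                          # forward-difference Jacobian
    n, f0 = len(z), F(z)
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n); dz[j] = h
        J[:, j] = (F(z + dz) - f0) / h
    return J

z, alpha = np.array([2.0, 2.0]), 0.5         # alpha: damping parameter in (0, 1]
for _ in range(100):
    J = jac(z)
    z = z + alpha * np.linalg.solve(J, -F(z))    # damped Newton step
print(z, F(z), np.linalg.cond(J))            # solution, residual, condition number
```

Monitoring the Jacobian condition number, as the abstract describes, flags the iterates where an undamped step would tend to diverge.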
Complex symmetric matrices with strongly stable iterates
NASA Technical Reports Server (NTRS)
Tadmor, E.
1985-01-01
Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained by utilizing a unitarily congruent invariant form. A sharp criterion is then provided for identifying those symmetric matrices whose spectral norm does not exceed one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to conclude the strong stability of a Lax-Wendroff scheme.
Natural selection of memory-one strategies for the iterated prisoner's dilemma.
Kraines, D P; Kraines, V Y
2000-04-21
In the iterated Prisoner's Dilemma, mutually cooperative behavior can become established through Darwinian natural selection. In simulated interactions of stochastic memory-one strategies for the iterated Prisoner's Dilemma, Nowak and Sigmund discovered that cooperative agents using a Pavlov (Win-Stay, Lose-Switch) type strategy eventually dominate a random population. This emergence follows more directly from a deterministic dynamical system based on differential reproductive success, or natural selection. When restricted to an environment of memory-one agents interacting in iterated Prisoner's Dilemma games with a 1% noise level, the Pavlov agent is the only cooperative strategy, and one of very few strategies overall, that cannot be invaded by a similar strategy. Pavlov agents are trusting but not suckers. They will exploit weakness but repent if punished for cheating. Copyright 2000 Academic Press.
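A minimal sketch of a noisy iterated Prisoner's Dilemma pitting Pavlov (win-stay, lose-switch) against an always-random opponent, using the standard payoffs T=5, R=3, P=1, S=0 and 1% execution noise; the population dynamics of the paper are not simulated here:

```python
import numpy as np

PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}  # row payoff

def play(rounds=10_000, noise=0.01, seed=10):
    rng = np.random.default_rng(seed)
    my, opp, score = 'C', rng.choice(['C', 'D']), 0
    for _ in range(rounds):
        if rng.random() < noise:                      # 1% execution noise
            my = 'D' if my == 'C' else 'C'
        score += PAYOFF[(my, opp)]
        # Pavlov: stay after a good payoff (R or T), switch after P or S
        win = PAYOFF[(my, opp)] >= 3
        my = my if win else ('D' if my == 'C' else 'C')
        opp = rng.choice(['C', 'D'])                  # random opponent
    return score / rounds

print(play())                                         # average payoff per round
```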
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
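A minimal sketch of the linear-time soft-thresholding (shrinkage) step that appears in each such ADAL/TV iteration, in its elementwise (anisotropic TV) variant; the threshold and data are illustrative:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1, elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

g = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])   # e.g., image gradients plus noise
print(shrink(g, 0.5))                        # small values are zeroed, edges survive
```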
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
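A minimal sketch of the Jacobian-free Newton–Krylov idea: the Jacobian-vector product is approximated by a finite difference of the residual, so GMRES never needs an assembled Jacobian. The residual F below is a toy 1D nonlinear diffusion stand-in, not the paper's coupled system:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):                                    # toy nonlinear residual on a 1D grid
    r = np.empty_like(u)
    r[0], r[-1] = u[0] - 1.0, u[-1]          # Dirichlet boundary conditions
    r[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] - 0.1 * u[1:-1] ** 3
    return r

u = np.zeros(50)
for _ in range(20):                          # Newton loop
    Fu = F(u)
    eps = 1e-7
    Jv = LinearOperator((50, 50),            # matrix-free Jacobian action J v
                        matvec=lambda v: (F(u + eps * v) - Fu) / eps)
    du, info = gmres(Jv, -Fu)                # Krylov solve of J du = -F(u)
    u = u + du
print(np.linalg.norm(F(u)))                  # converged residual norm
```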
Non-iterative determination of the stress-density relation from ramp wave data through a window
NASA Astrophysics Data System (ADS)
Dowling, Evan; Fratanduono, Dayne; Swift, Damian
2017-06-01
In the canonical ramp compression experiment, a smoothly increasing load is applied to the surface of the sample, and the particle velocity history is measured at interfaces at two or more different distances into the sample. The velocity histories are used to deduce a stress-density relation by correcting for perturbations caused by reflected release waves, usually via the iterative Lagrangian analysis technique of Rothman and Maw. We previously described a non-iterative (recursive) method of analysis, which was more stable and orders of magnitude faster than iteration, but was subject to the limitation that the free-surface velocity had to be sampled at uniform intervals. We have now developed more general recursive algorithms suitable for analyzing ramp data through a finite-impedance window. Free surfaces can be treated seamlessly, and the need for uniform velocity sampling has been removed. These calculations require interpolation of partially released states using the partially constructed isentrope, making them slower than the previous free-surface scheme, but they are still much faster than iterative analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques, such as the expectation-maximization (EM) approach to maximum-likelihood estimation, can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image, compensated for known imager-system blur, while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on the currently available high-performance processors. In the original algorithm, the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of the translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce the computational burden significantly when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
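A minimal sketch of one-step shift registration via phase correlation, an integer-pixel illustration of the idea of estimating frame shifts in a single pass; this is not necessarily the authors' cost-function method:

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer translation of img relative to ref by phase correlation."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12               # whitening -> sharp correlation peak
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = ref.shape                             # wrap indices to signed shifts
    return (dy if dy <= n // 2 else dy - n), (dx if dx <= m // 2 else dx - m)

rng = np.random.default_rng(11)
ref = rng.random((64, 64))
img = np.roll(ref, (3, -5), axis=(0, 1))         # known shift for the test
print(register_shift(ref, img))                  # expected (3, -5)
```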
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (A_L) were randomly selected from a three-parameter inverse-gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of A_L and an exponential rollover for small values of A_L; the rollover (maximum probability) occurs at about A_L = 400 m². This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in the existing literature. The number of landslide areas (N_L) selected for each triggered event iteration was chosen to give an average density of 1 landslide km⁻², i.e. N_L = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen in a 'T' shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) in a 'T' formation with another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (A_BL) and number of road blockages (N_BL) recorded. This process was performed 500 times (iterations) in a Monte Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km² region, the number of road blocks per iteration, N_BL, ranges from 0 to 7. The average blockage area over the 500 iterations (Ā_BL) is about 3000 m², which closely matches the value of Ā_L for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(N_BL) as a function of N_BL, follows reasonably well a three-parameter inverse-gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at N_BL = 1.3. In this paper we have begun to calculate the probability of a given number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during different magnitude landslide triggering event scenarios.
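A minimal sketch of one Monte Carlo iteration of this kind of model, with square landslides, a smaller grid, a stand-in size distribution, and a 'T'-shaped road mask; all of these simplifications are ours, and the paper instead samples areas from the inverse-gamma fit described above:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 800                                          # smaller grid than the paper's 4000^2
road = np.zeros((n, n), dtype=bool)
road[n // 2, :] = True                           # 'T': one full-width road...
road[: n // 2, n // 2] = True                    # ...meeting a half-length road

def one_iteration(n_slides=50):
    blocked = 0
    for _ in range(n_slides):
        side = max(1, int(rng.lognormal(1.5, 0.8)))    # stand-in area distribution
        r, c = rng.integers(0, n - side, size=2)       # random drop location
        if road[r:r + side, c:c + side].any():         # landslide overlaps the road?
            blocked += 1
    return blocked

counts = [one_iteration() for _ in range(500)]
print(np.bincount(counts))                       # empirical distribution p(N_BL)
```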
LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*
Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.
2014-01-01
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x∈ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK’s DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
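A minimal sketch of the random-normal-projection preconditioning idea for the m ≫ n case, with oversampling γ = 2; SciPy's LSQR stands in for the paper's tuned iterative solvers:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(13)
m, n, gamma = 5000, 50, 2
A = rng.standard_normal((m, n)) * rng.gamma(1.0, 1.0, n)   # ill-scaled columns
b = rng.standard_normal(m)

G = rng.standard_normal((int(gamma * n), m))   # random normal projection (parallel)
U, s, Vt = np.linalg.svd(G @ A, full_matrices=False)
N = Vt.T / s                                   # right preconditioner N = V Sigma^{-1}

# Solve min ||(A N) y - b||_2; A N is well-conditioned with high probability,
# so LSQR needs a small, predictable number of iterations. Then x = N y.
result = lsqr(A @ N, b)
x = N @ result[0]
print(result[2], np.linalg.norm(A.T @ (A @ x - b)))   # iterations, normal-eq residual
```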
Heating and current drive requirements towards steady state operation in ITER
NASA Astrophysics Data System (ADS)
Poli, F. M.; Bonoli, P. T.; Kessel, C. E.; Batchelor, D. B.; Gorelenkova, M.; Harvey, B.; Petrov, Y.
2014-02-01
Steady state scenarios envisaged for ITER aim at optimizing the bootstrap current, while maintaining sufficient confinement and stability to provide the necessary fusion yield. Non-inductive scenarios will need to operate with Internal Transport Barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. However, the large pressure gradients associated with ITBs in regions of weak or negative magnetic shear can be conducive to ideal MHD instabilities, reducing the no-wall limit. The E × B flow shear from toroidal plasma rotation is expected to be low in ITER, with a major role in the ITB dynamics being played by magnetic geometry. Combinations of H/CD sources that maintain weakly reversed magnetic shear profiles throughout the discharge are the focus of this work. Time-dependent transport simulations indicate that, with a trade-off of the EC equatorial and upper launcher, the formation and sustainment of quasi-steady state ITBs could be demonstrated in ITER with the baseline heating configuration. However, with proper constraints from peeling-ballooning theory on the pedestal width and height, the fusion gain and the maximum non-inductive current are below the ITER target. Upgrades of the heating and current drive system in ITER, like the use of Lower Hybrid current drive, could overcome these limitations, sustaining higher non-inductive current and confinement, more expanded ITBs which are ideal MHD stable.
First Operation with the JET ITER-Like Wall
NASA Astrophysics Data System (ADS)
Neu, Rudolf
2012-10-01
To consolidate ITER design choices and prepare for its operation, JET has implemented ITER's plasma-facing materials, namely Be at the main wall and W in the divertor. In addition, protection systems, diagnostics and the vertical stability control were upgraded, and the heating capability of the neutral beams was increased to over 30 MW. First results confirm the expected benefits and the limitations of all-metal plasma-facing components (PFCs), but also yield understanding of operational issues directly relating to ITER. H retention is lower by at least a factor of 10 in all operational scenarios compared to that with C PFCs. The lower C content (~ a factor of 10) has led to much lower radiation during the plasma burn-through phase, eliminating breakdown failures. Similarly, the intrinsic radiation observed during disruptions is very low, leading to high power loads and to a slow current quench. Massive gas injection using a D2/Ar mixture restores levels of radiation and vessel forces similar to those of mitigated disruptions with the C wall. Dedicated L-H transition experiments indicate a power threshold reduced by 30%, a distinct minimum density and a pronounced shape dependence. The L-mode density limit was found to be up to 30% higher than for C, allowing stable detached divertor operation over a larger density range. Stable H-modes as well as the hybrid scenario could only be re-established when using gas puff levels of a few 10^21 e/s. On average the confinement is lower with the new PFCs, but nevertheless, H factors around 1 (H-mode) and 1.2 (at β_N ~ 3, hybrids) have been achieved with W concentrations well below the maximum acceptable level (<10^-5).
ERIC Educational Resources Information Center
Burde, Dana
2012-01-01
Randomized trials have experienced a marked surge in endorsement and popularity in education research in the past decade. This surge reignited paradigm debates and spurred qualitative critics to accuse these experimental designs of eclipsing qualitative research. This article reviews a current iteration of this debate and examines two randomized…
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and the related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid the iteration required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Ri_b. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Ri_b and the roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with good accuracy in the most relevant range of stability and roughness lengths. For small Ri_b, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Ri_b they are much smaller than the currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the newly proposed parametrizations in models are discussed.
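A minimal sketch of the iterative MOST solution that such semi-analytic fits replace: a fixed-point iteration on the stability parameter ζ = z/L given Ri_b, with simple log-linear stability corrections. The constants, heights, and roughness lengths are illustrative, not the SHEBA-based functions of the paper:

```python
import numpy as np

z, z0, z0t = 10.0, 1e-3, 1e-4           # measurement height and roughness lengths [m]
beta = 5.0                               # illustrative log-linear stability constant

def psi(zeta):                           # stability correction, stable side only
    return -beta * zeta

def transfer_coeffs(Ri_b, iters=50):
    zeta = Ri_b                          # initial guess for z/L
    for _ in range(iters):               # fixed-point iteration on zeta
        fm = np.log(z / z0) - psi(zeta) + psi(zeta * z0 / z)
        fh = np.log(z / z0t) - psi(zeta) + psi(zeta * z0t / z)
        zeta = Ri_b * fm**2 / fh         # from Ri_b = zeta * fh / fm^2
    k = 0.4                              # von Karman constant
    return k**2 / fm**2, k**2 / (fm * fh)    # C_D (momentum), C_H (heat)

print(transfer_coeffs(0.05))
```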
A novel Iterative algorithm to text segmentation for web born-digital images
NASA Astrophysics Data System (ADS)
Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen
2015-07-01
Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, the candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds, and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method the final well-segmented text regions are selected from these groups over all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms, and the overall performance outperforms the best methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses
NASA Astrophysics Data System (ADS)
Novak, Roman; Vencelj, Matjaž
2009-12-01
The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
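A minimal sketch of the Gauss-Seidel sweep itself; the small diagonally dominant system stands in for the pile-up matrices arising in calorimetric pulse processing:

```python
import numpy as np

def gauss_seidel(A, b, sweeps=25):
    """Gauss-Seidel: each unknown is updated in place using the newest values."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

rng = np.random.default_rng(14)
A = rng.random((8, 8)) + 8 * np.eye(8)   # diagonal dominance guarantees convergence
b = rng.random(8)
print(np.linalg.norm(A @ gauss_seidel(A, b) - b))
```

The inner update is inherently sequential per row but, as the abstract notes, the structure of such matrices leaves parallelism that hardware implementations can exploit.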
Conceptual Design of the ITER Plasma Control System
NASA Astrophysics Data System (ADS)
Snipes, J. A.
2013-10-01
The conceptual design of the ITER Plasma Control System (PCS) has been approved and the preliminary design has begun for the 1st plasma PCS. This is a collaboration of many plasma control experts from existing devices to design and test plasma control techniques applicable to ITER on existing machines. The conceptual design considered all phases of plasma operation, ranging from non-active H/He plasmas through high fusion gain inductive DT plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture can satisfy the demands of the ITER Research Plan. The PCS will control plasma equilibrium and density, plasma heat exhaust, a range of MHD instabilities (including disruption mitigation), and the non-inductive current profile required to maintain stable steady-state scenarios. The PCS architecture requires sophisticated shared actuator management and event handling systems to prioritize control goals, algorithms, and actuators according to dynamic control needs and monitor plasma and plant system events to trigger automatic changes in the control algorithms or operational scenario, depending on real-time operating limits and conditions.
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
Cyclic Game Dynamics Driven by Iterated Reasoning
Frey, Seth; Goldstone, Robert L.
2013-01-01
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191
Resonance energy transfer process in nanogap-based dual-color random lasing
NASA Astrophysics Data System (ADS)
Shi, Xiaoyu; Tong, Junhua; Liu, Dahe; Wang, Zhaona
2017-04-01
The resonance energy transfer (RET) process between Rhodamine 6G and oxazine in nanogap-based random systems is systematically studied by revealing the variations and fluctuations of the RET coefficients with pump power density. Three working regions, namely stable fluorescence, dynamic lasing, and stable lasing, are thus demonstrated in the dual-color random systems. The stable RET coefficients in the fluorescence and lasing regions are generally different and depend strongly on the donor concentration and the donor-acceptor ratio. These results may provide a way to reveal the regularities of energy distribution in random systems and to design tunable multi-color coherent random lasers for colorful imaging.
A strategy with novel evolutionary features for the iterated prisoner's dilemma.
Li, Jiawei; Kendall, Graham
2009-01-01
In recent iterated prisoner's dilemma tournaments, the most successful strategies were those that had identification mechanisms. By playing a predetermined sequence of moves and learning from their opponents' responses, these strategies managed to identify their opponents. We believe that these identification mechanisms may be very useful in evolutionary games. In this paper one such strategy, which we call the collective strategy, is analyzed. Collective strategies apply a simple but efficient identification mechanism (one that just distinguishes themselves from other strategies), and this mechanism allows them to cooperate only with their group members and defect against all others. In this way, collective strategies are able to maintain a stable population in the evolutionary iterated prisoner's dilemma. By means of an invasion barrier, this strategy is compared with other strategies in evolutionary dynamics in order to demonstrate its evolutionary features. We also find that this collective behavior assists the evolution of cooperation in specific evolutionary environments.
NASA Technical Reports Server (NTRS)
Sankaran, V.
1974-01-01
An iterative procedure is described for determining the constant gain matrix that will stabilize a linear constant multivariable system using output feedback. The use of this procedure avoids the transformation of variables which is required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.
NASA Astrophysics Data System (ADS)
Wiesen, S.; Köchl, F.; Belo, P.; Kotov, V.; Loarte, A.; Parail, V.; Corrigan, G.; Garzotti, L.; Harting, D.
2017-07-01
The integrated model JINTRAC is employed to assess the dynamic density evolution of the ITER baseline scenario when fuelled by discrete pellets. The consequences on the core confinement properties, α-particle heating due to fusion and the effect on the ITER divertor operation, taking into account the material limitations on the target heat loads, are discussed within the integrated model. Using the model one can observe that stable but cyclical operational regimes can be achieved for a pellet-fuelled ITER ELMy H-mode scenario with Q = 10 maintaining partially detached conditions in the divertor. It is shown that the level of divertor detachment is inversely correlated with the core plasma density due to α-particle heating, and thus depends on the density evolution cycle imposed by pellet ablations. The power crossing the separatrix to be dissipated depends on the enhancement of the transport in the pedestal region being linked with the pressure gradient evolution after pellet injection. The fuelling efficacy of the deposited pellet material is strongly dependent on the E × B plasmoid drift. It is concluded that integrated models like JINTRAC, if validated and supported by realistic physics constraints, may help to establish suitable control schemes of particle and power exhaust in burning ITER DT-plasma scenarios.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao
2013-07-01
This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
Fetal head detection and measurement in ultrasound images by an iterative randomized Hough transform
NASA Astrophysics Data System (ADS)
Lu, Wei; Tan, Jinglu; Floyd, Randall C.
2004-05-01
This paper describes an automatic method for measuring the biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. A total of 217 ultrasound images were segmented using a K-means classifier, and the head skull was detected in 214 of the 217 cases by an iterative randomized Hough transform developed for the detection of incomplete curves in images with strong noise, without user intervention. The automatic measurements were compared with conventional manual measurements by sonographers and a trained panel. The inter-run variations and the differences between the automatic and conventional measurements were small compared with published inter-observer variations. The results showed that the automated measurements were as reliable as the expert measurements and more consistent. This method has great potential in clinical applications.
Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures
2003-01-01
NiFe magnetic nano-elements are calculated. INTRODUCTION: With decreasing size of magnetic nanostructures, thermal effects become increasingly important... The thermal field is assumed to be a Gaussian random process with the statistical properties ⟨H_th,i(t)⟩ = 0 and ⟨H_th,i(t) H_th,j(t′)⟩ ∝ δ_ij δ(t − t′)... The optimal path satisfies the property D^(k) = ∇E(M^(k)) − [∇E(M^(k)) · t] t = 0, for k = 1, ..., m (12). The optimal path can be found using an iterative scheme. In each iteration step the...
Iterative repair for scheduling and rescheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Deale, Michael
1991-01-01
An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
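A skeleton of such an iterative-repair annealing loop might look as follows (a generic sketch; `violations` and `repair_move` are hypothetical stand-ins for the paper's constraint framework):

```python
import math, random

# Iterative repair with simulated annealing (skeleton).
def anneal(schedule, violations, repair_move, t0=10.0, cooling=0.95, steps=5000):
    temp = t0
    cost = violations(schedule)
    for _ in range(steps):
        candidate = repair_move(schedule)          # locally repair one conflict
        delta = violations(candidate) - cost
        # Always accept improvements; accept regressions with Boltzmann
        # probability, which is what lets the search escape local minima.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            schedule, cost = candidate, cost + delta
        temp *= cooling                            # geometric cooling schedule
        if cost == 0:                              # all constraints satisfied
            break
    return schedule, cost
```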
Challenges and status of ITER conductor production
NASA Astrophysics Data System (ADS)
Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.
2014-04-01
Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge, as it relies on an unprecedented collaboration of seven partners, representing more than half of the world's population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ, and it involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strand, while the poloidal field (PF), correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strand. The required amount of Nb3Sn strand far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale-up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its lifetime, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, far beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor that ensures stable performance under EM and thermal cycling. Production of the PF, CC and busbar conductors is also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. We then provide examples of technical challenges that have been encountered and present the status of ITER conductor production worldwide.
Fixed point theorems and dissipative processes
NASA Technical Reports Server (NTRS)
Hale, J. K.; Lopes, O.
1972-01-01
The deficiencies of theories that characterize the maximal compact invariant set of T as asymptotically stable and that assert some iterate of T has a fixed point are discussed. It is shown that this fixed point condition is always satisfied for condensing and locally dissipative T. Applications are given to a class of neutral functional differential equations.
Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring
NASA Astrophysics Data System (ADS)
Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank
2018-04-01
Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
The Mixed Finite Element Multigrid Method for Stokes Equations
Muzhinji, K.; Shateyi, S.; Motsa, S. S.
2015-01-01
The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers give the best performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results, and we present numerical results to demonstrate the efficiency and robustness of the multigrid method and to confirm the theoretical results. PMID:25945361
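For reference, the symmetric indefinite saddle-point structure referred to above has the generic form (a standard sketch: $A$ is the discrete vector Laplacian, $B$ the discrete divergence operator):

```latex
\begin{equation*}
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad A = A^{T} \succ 0 .
\end{equation*}
```

The zero pressure block is what makes the system indefinite, which is why coupled velocity-pressure smoothers such as Braess-Sarazin are needed in the multigrid cycle.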
Lam, King-Yeung; Lou, Yuan
2014-02-01
We consider a mathematical model of two competing species for the evolution of conditional dispersal in a spatially varying but temporally constant environment. The two species differ only in their dispersal strategies, which are a combination of random dispersal and biased movement upward along the resource gradient. In the absence of biased movement or advection, Hastings showed that the mutant can invade when rare if and only if it has a smaller random dispersal rate than the resident. When there is a small amount of biased movement or advection, we show that there is a positive random dispersal rate that is both locally evolutionarily stable and convergent stable. Our analysis of the model suggests that a balanced combination of random and biased movement might be a better habitat selection strategy for populations.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied, assuming that its lower semicontinuous value function has, at a chosen individual parameter value, certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential; in other words, the individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga for the considered problem, whose initial data can be specified only approximately. A substantial difference between the proven theorem and its classical namesake is that the former takes into account the possible instability of the problem under perturbed initial data and, as a consequence, allows for the inherited instability of the classical optimality conditions. The theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, a time-optimal control problem.
Understanding the stability of the low torque ITER Baseline Scenario in DIII-D
NASA Astrophysics Data System (ADS)
Turco, Francesca
2017-10-01
Analysis of the evolving current density (J), pedestal and rotation profiles in a database of 200 ITER Baseline Scenario discharges in the DIII-D tokamak sheds light on the cause of the disruptive instability limiting both high- and low-torque operation of these plasmas. The m = 2/n = 1 tearing modes, occurring after several pressure-relaxation times, are related to the shape of the current profile in the outer region of the plasma. The q = 2 surface is located just inside the current pedestal, near a minimum in J. This well in J deepens at constant β_N and at lower rotation, causing the equilibrium to evolve towards a classically unstable state. Lack of core-edge differential rotation likely biases the marginal point towards instability during the secular trend in J. New results from the 2017 experimental campaign establish the first reproducible, stable operation at T = 0 Nm for this scenario. A new ramp-up recipe with delayed heating keeps the discharges stable without the need for ECCD stabilization. The J-profile shape in the new shots is consistent with an expansion of the previous "shallow well" stable operational space. Real-time Active MHD Spectroscopy (AMS) has been applied to IBS plasmas for the first time, and the plasma response measurements show that the AMS can help sense the approach to instability during the discharges. The AMS data show the trend towards instability at low rotation, and MARS-K modelling partially reproduces the experimental trend if collisionality and resistivity are included. The modelling results are sensitive to the edge resistivity, which may indicate that the AMS is measuring changes in ideal (kink) stability, to which the tearing stability index Δ' is correlated. Together these results constitute a crucial step towards physical understanding and sensing capability for MHD stability in the Q = 10 ITER scenario. Work supported by US DOE under DE-FC02-04ER54698 and DE-FG02-04ER54761.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either model alone.
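A hypothetical sketch of an iterative computation on such a lender-loan bipartite graph is shown below (HITS-like mutual reinforcement between lender weights and loan scores; the paper's exact update rules are not reproduced):

```python
import numpy as np

# Iterative scoring on a lender-loan bipartite graph (schematic).
def iterate_bipartite(adj, n_iter=50, tol=1e-9):
    # adj[i, j] = 1 if lender i invested in loan j
    lenders = np.ones(adj.shape[0])
    loans = np.ones(adj.shape[1])
    for _ in range(n_iter):
        loans = adj.T @ lenders                    # loans backed by good lenders
        loans /= max(np.linalg.norm(loans), 1e-12)
        new_lenders = adj @ loans                  # lenders picking good loans
        new_lenders /= max(np.linalg.norm(new_lenders), 1e-12)
        if np.linalg.norm(new_lenders - lenders) < tol:   # stable fixed point
            lenders = new_lenders
            break
        lenders = new_lenders
    return lenders, loans

adj = (np.random.rand(6, 10) < 0.4).astype(float)  # toy investment matrix
lender_scores, loan_scores = iterate_bipartite(adj)
```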
Evaluating user reputation in online rating systems via an iterative group-based ranking method
NASA Astrophysics Data System (ADS)
Gao, Jian; Zhou, Tao
2017-05-01
Reputation is a valuable asset in online social life, and it has drawn increasing attention. Owing to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated from the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The user reputations and the group sizes are updated iteratively until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method outperforms state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
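A schematic of the iterative reputation-allocation loop described above might look as follows (an assumed reading of the method, not the authors' exact formulation; `groups` is a hypothetical pre-computed grouping by rating similarity):

```python
import numpy as np

# Iterative group-based reputation allocation (schematic).
def iterative_group_reputation(groups, n_users, n_iter=100, tol=1e-8):
    # groups: list of lists of user indices sharing similar ratings
    rep = np.ones(n_users) / n_users
    for _ in range(n_iter):
        # Weighted group size: high-reputation members dominate their group.
        weight = np.array([rep[g].sum() for g in groups])
        new_rep = np.zeros(n_users)
        for w, g in zip(weight, groups):
            new_rep[np.asarray(g)] += w / len(g)   # share group weight among members
        new_rep /= new_rep.sum()
        if np.abs(new_rep - rep).max() < tol:      # reputations have become stable
            return new_rep
        rep = new_rep
    return rep

groups = [[0, 1, 2], [2, 3], [4, 5, 6, 7]]         # toy similarity groups
print(iterative_group_reputation(groups, n_users=8))
```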
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous-time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resulting transient dynamics when the external time modulation is a piecewise-constant signal. The iteration scheme is first demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.
The Iterated Classification Game: A New Model of the Cultural Transmission of Language
Swarup, Samarth; Gasser, Les
2010-01-01
The Iterated Classification Game (ICG) combines the Classification Game with the Iterated Learning Model (ILM) to create a more realistic model of the cultural transmission of language through generations. It includes both learning from parents and learning from peers. Further, it eliminates some of the chief criticisms of the ILM: that it does not study grounded languages, that it does not include peer learning, and that it builds in a bias for compositional languages. We show that, over the span of a few generations, a stable linguistic system emerges that can be acquired very quickly by each generation, is compositional, and helps the agents to solve the classification problem with which they are faced. The ICG also leads to a different interpretation of the language acquisition process. It suggests that the role of parents is to initialize the linguistic system of the child in such a way that subsequent interaction with peers results in rapid convergence to the correct language. PMID:20190877
Brownian motion properties of optoelectronic random bit generators based on laser chaos.
Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge
2016-07-11
The nondeterministic properties of optoelectronic random bit generators (RBGs) based on laser chaos are experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, in addition to passing the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these results give mathematically provable evidence that an ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
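A law-of-the-iterated-logarithm check on a bit stream can be sketched as follows (illustrative only, with pseudorandom bits standing in for the measured ones): map bits to ±1, form the random walk S_n, and compare against the LIL envelope, for which limsup S_n / sqrt(2 n log log n) = 1 almost surely.

```python
import numpy as np

# LIL sanity check on a bit stream (sketch).
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 10**6)                  # stand-in for measured bits
steps = 2 * bits - 1                              # map 0/1 -> -1/+1
s = np.cumsum(steps)                              # random walk S_n
n = np.arange(1, len(s) + 1)
mask = n > 10                                     # log log n must be positive
envelope = np.sqrt(2 * n[mask] * np.log(np.log(n[mask])))
ratio = s[mask] / envelope
print("max normalized excursion:", ratio.max())   # should hover near/below 1
```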
Polarimetric signatures of a coniferous forest canopy based on vector radiative transfer theory
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.; Amar, F.; Mougin, E.; Lopes, A.; Beaudoin, A.
1992-01-01
Complete polarization signatures of a coniferous forest canopy are studied by the iterative solution of the vector radiative transfer equations up to the second order. The forest canopy constituents (leaves, branches, stems, and trunk) are embedded in a multi-layered medium over a rough interface. The branches, stems and trunk scatterers are modeled as finite randomly oriented cylinders. The leaves are modeled as randomly oriented needles. For a plane wave exciting the canopy, the average Mueller matrix is formulated in terms of the iterative solution of the radiative transfer solution and used to determine the linearly polarized backscattering coefficients, the co-polarized and cross-polarized power returns, and the phase difference statistics. Numerical results are presented to investigate the effect of transmitting and receiving antenna configurations on the polarimetric signature of a pine forest. Comparison is made with measurements.
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years, with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. ASIR-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series, ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. ASIR 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter
Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...
2018-04-06
Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. Conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter: random noise is described by its mean square, while erratic noise is downweighted through a damped weight. Unlike conventional sparse-domain thresholding filters, defining the misfit between the noisy data and the recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed that transforms the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Tests with several data sets demonstrate that the proposed filter can successfully attenuate erratic noise without damaging the useful signal, compared with conventional denoising approaches based on the LS criterion.
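The iterate-then-linearize step can be illustrated with generic iteratively reweighted least squares under a Huber loss (a sketch of the general principle only, not the authors' sparsity-promoting filter): each pass solves an ordinary LS problem whose weights downweight erratic residuals.

```python
import numpy as np

# Generic IRLS with Huber weights (sketch).
def huber_irls(A, b, delta=1.0, n_iter=20):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # plain LS initialization
    for _ in range(n_iter):
        r = A @ x - b
        # Huber weights: 1 for small residuals, damped for large (erratic) ones.
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        W = np.sqrt(w)[:, None]
        x = np.linalg.lstsq(W * A, np.sqrt(w) * b, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 0.1 * rng.normal(size=200)
b[::25] += 20.0                                    # large isolated (erratic) events
print(huber_irls(A, b))                            # close to x_true despite spikes
```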
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-07-01
X-ray micro- and nanotomography has evolved into a quantitative analysis tool, rather than a merely qualitative visualization technique, for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, however, the noise has been projected from Radon space to Euclidean space, i.e., post-reconstruction noise cannot be expected to be random but rather correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well on random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for such samples that combines a noise-level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image-enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise-constant signals.
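The weak-denoising idea can be sketched with scikit-image's non-local means as the building block (an assumed reading of the framework, not the authors' implementation): rather than one aggressive pass, apply several mild passes, re-estimating the residual noise level each time so that texture removal stays controlled.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Iterative weak nonlocal-means denoising (sketch).
def iterative_nlm(image, n_passes=4, strength=0.6):
    out = image.astype(float)
    for _ in range(n_passes):
        sigma = estimate_sigma(out)               # residual noise-level estimate
        if sigma < 1e-6:
            break
        out = denoise_nl_means(out, h=strength * sigma, sigma=sigma,
                               patch_size=5, patch_distance=6, fast_mode=True)
    return out

noisy = np.clip(0.5 + 0.1 * np.random.randn(64, 64), 0, 1)  # toy flat phantom
clean = iterative_nlm(noisy)
```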
3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.
Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu
2016-09-01
Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. It remains a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images that integrates semisupervised label propagation and regularization by 3D exemplar registration. First, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization using 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by a shape-based foreground dentine probability acquired by the exemplar registration, as well as an appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables efficient and reliable tooth segmentation from CBCT images and makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and intraoperative use of dental morphologies in maxillofacial and orthodontic treatments.
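The random-walk label propagation at the core of the method can be sketched with scikit-image's solver (a stand-in; the exemplar registration and SVM-based soft constraints are not reproduced, and the volume here is a hypothetical placeholder for a CBCT VOI):

```python
import numpy as np
from skimage.segmentation import random_walker

# Seeds: 1 = tooth prior, 2 = background prior, 0 = unlabeled voxels.
volume = np.random.rand(32, 32, 32)              # stand-in for a CBCT VOI
labels = np.zeros_like(volume, dtype=np.uint8)
labels[15:17, 15:17, 15:17] = 1                  # user-defined tooth seed
labels[0, :, :] = 2                              # background seed
segmentation = random_walker(volume, labels, beta=130, mode='cg')
```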
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development; hence, there is a high demand for computational models that predict the toxicity effects of potential drugs. In this study, we used a dataset comprising four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Because of the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes, and the second step applies a data-cleaning method to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results show that the proposed model performs well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
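An illustrative two-step resampling loop in the spirit of the iterative sampling idea above might look as follows (a hypothetical update rule, not the authors' exact procedure; the cleaning step is only indicated):

```python
import numpy as np

# Iterative class-prior resampling (schematic).
def iterative_sampling(X, y, n_rounds=5, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Xb, yb = X.copy(), y.copy()
    for _ in range(n_rounds):
        minority = np.flatnonzero(yb == 1)
        majority = np.flatnonzero(yb == 0)
        if len(minority) >= len(majority):
            break
        # Step 1: shift the class prior -- oversample minority, undersample majority.
        n_move = max(1, int(step * len(minority)))
        add = rng.choice(minority, n_move)
        drop = rng.choice(majority, n_move, replace=False)
        keep = np.setdiff1d(np.arange(len(yb)), drop)
        Xb = np.vstack([Xb[keep], Xb[add]])
        yb = np.concatenate([yb[keep], yb[add]])
        # Step 2 (data cleaning, e.g. removing overlapping samples) would go here.
    return Xb, yb

X = np.random.rand(100, 4)
y = np.array([1] * 15 + [0] * 85)                 # imbalanced toy labels
Xb, yb = iterative_sampling(X, y)
```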
Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng
2013-01-01
Objectives: To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods: We performed a prospective randomized study of 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired to 5 = excellent, with 3-5 defined as diagnostic). Image noise and signal intensity were measured; the signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, the chi-square test, the Kruskal-Wallis test and multivariable linear regression. Results: Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00-35.03 HU) and Group 3 (34.99-35.02 HU), while it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion: The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality, while the effective dose can be reduced by more than 50%. PMID:23741444
Interferometric tomography of continuous fields with incomplete projections
NASA Technical Reports Server (NTRS)
Cha, Soyoung S.; Sun, Hogwei
1988-01-01
Interferometric tomography in the presence of an opaque object is investigated. The developed iterative algorithm does not need to augment the missing information. It is based on the successive reconstruction of the difference field, i.e., the difference between the object field to be reconstructed and its estimate, only in the defined region. Application of the algorithm results in stable convergence.
Ideal MHD stability and performance of ITER steady-state scenarios with ITBs
NASA Astrophysics Data System (ADS)
Poli, F. M.; Kessel, C. E.; Chance, M. S.; Jardin, S. C.; Manickam, J.
2012-06-01
Non-inductive steady-state scenarios on ITER will need to operate with internal transport barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. The large pressure gradients at the location of the internal barrier are conducive to the development of ideal MHD instabilities that may limit the plasma performance and may lead to plasma disruptions. Fully non-inductive scenario simulations with five combinations of heating and current drive sources are presented in this work, with plasma currents in the range 7-10 MA. For each configuration the linear, ideal MHD stability is analysed for variations of the Greenwald fraction and of the pressure peaking factor around the operating point, aiming at defining an operational space for stable, steady-state operations at optimized performance. It is shown that plasmas with lower hybrid heating and current drive maintain the minimum safety factor above 1.5, which is desirable in steady-state operations to avoid neoclassical tearing modes. Operating with moderate ITBs at 2/3 of the minor radius, these plasmas have a minimum safety factor above 2, are ideal MHD stable and reach Q ≳ 5 operating above the ideal no-wall limit.
A path to stable low-torque plasma operation in ITER with test blanket modules
Lanctot, Matthew J.; Snipes, J. A.; Reimerdes, H.; ...
2016-12-12
New experiments in the low-torque ITER Q = 10 scenario on DIII-D demonstrate that n = 1 magnetic fields from a single row of ex-vessel control coils enable operation at ITER performance metrics in the presence of applied non-axisymmetric magnetic fields from a test blanket module (TBM) mock-up coil. With n = 1 compensation, operation below the ITER-equivalent injected torque is successful at three times the ITER equivalent toroidal magnetic field ripple for a pair of TBMs in one equatorial port, whereas the uncompensated TBM field leads to rotation collapse, loss of H-mode and plasma current disruption. In companion experiments at high plasma beta, where the n = 1 plasma response is enhanced, uncorrected TBM fields degrade energy confinement and the plasma angular momentum while increasing fast ion losses; however, disruptions are not routinely encountered owing to increased levels of injected neutral beam torque. In this regime, n = 1 field compensation leads to recovery of a dominant fraction of the TBM-induced plasma pressure and rotation degradation, and an 80% reduction in the heat load to the first wall. These results show that the n = 1 plasma response plays a dominant role in determining plasma stability, and that n = 1 field compensation alone not only recovers most of the impact on plasma performance of the TBM, but also protects the first wall from potentially damaging heat flux. Despite these benefits, plasma rotation braking from the TBM fields cannot be fully recovered using standard error field control. Lastly, given the uncertainty in extrapolation of these results to the ITER configuration, it is prudent to design the TBMs with as low a ferromagnetic mass as possible without jeopardizing the TBM mission.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model of Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
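A random-rule growth sweep in this spirit can be sketched as follows (a schematic etching-type rule with an illustrative 50% rule probability, not the exact model of the paper):

```python
import numpy as np

# One sequential sweep with a randomly chosen rule at each site (schematic).
def sweep(h, rng):
    L = len(h)
    for i in range(L):                         # sequential, not random, site scan
        if rng.random() < 0.5:                 # randomness is in the rule choice
            # etching-type rule: neighbors above the site are lowered to it,
            # then the site itself is etched one unit down
            for j in ((i - 1) % L, (i + 1) % L):
                h[j] = min(h[j], h[i])
            h[i] -= 1
    return h

rng = np.random.default_rng(0)
h = np.zeros(64, dtype=int)
for _ in range(1000):
    h = sweep(h, rng)
print("interface width:", h.std())
```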
PyCCF: Python Cross Correlation Function for reverberation mapping studies
NASA Astrophysics Data System (ADS)
Sun, Mouyuan; Grier, C. J.; Peterson, B. M.
2018-05-01
PyCCF emulates a Fortran program written by B. Peterson for use in reverberation mapping. The code cross-correlates two unevenly sampled light curves using linear interpolation and measures the peak and centroid of the cross-correlation function. In addition, it can run Monte Carlo iterations using flux randomization and random subset selection (RSS) to produce cross-correlation centroid distributions and thereby estimate the uncertainties in the cross-correlation results.
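The interpolated cross-correlation idea that PyCCF implements can be sketched as follows (a generic ICCF sketch with illustrative parameter values, not PyCCF's own API): for each trial lag, one curve is linearly interpolated onto the other's shifted time grid and the Pearson correlation is computed.

```python
import numpy as np

# Interpolated cross-correlation function for unevenly sampled curves (sketch).
def iccf(t1, f1, t2, f2, lags):
    r = np.empty(len(lags))
    for k, lag in enumerate(lags):
        f2_interp = np.interp(t1 + lag, t2, f2)   # shift-and-interpolate
        r[k] = np.corrcoef(f1, f2_interp)[0, 1]
    return r

lags = np.linspace(-30.0, 30.0, 121)
t1 = np.sort(np.random.uniform(0, 200, 80))       # uneven sampling
t2 = np.sort(np.random.uniform(0, 200, 90))
f1 = np.sin(t1 / 20.0)
f2 = np.sin((t2 - 10.0) / 20.0)                   # echo delayed by 10 days
r = iccf(t1, f1, t2, f2, lags)
peak = lags[np.argmax(r)]
sel = r > 0.8 * r.max()                           # centroid above 80% of peak
print("peak lag:", peak, "centroid:", np.average(lags[sel], weights=r[sel]))
```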
NASA Astrophysics Data System (ADS)
Tour, James M.; Schumm, Jeffrey S.; Pearson, Darren L.
1994-06-01
Described is the synthesis of oligo(2-ethylphenylene ethynylene)s and oligo(2-(3'-ethylheptyl)phenylene ethynylene)s via an iterative divergent-convergent approach. Synthesized were the monomer, dimer, tetramer, and octamer of the ethyl derivative and the monomer, dimer, tetramer, octamer, and 16-mer of the ethylheptyl derivative; the 16-mer is 128 Å long. At each stage in the iteration, the length of the framework doubles. Only three sets of reaction conditions are needed for the entire iterative synthetic sequence: an iodination, a protodesilylation, and a Pd/Cu-catalyzed cross-coupling. The oligomers were characterized spectroscopically and by mass spectrometry. The optical properties are presented, showing the stage at which optical absorbance saturates. The size-exclusion chromatography values for the number-average weights, relative to polystyrene, illustrate the tremendous differences in hydrodynamic volume between these rigid-rod oligomers and the random coils of polystyrene; these differences become quite apparent at the octamer stage. These oligomers may act as molecular wires in molecular electronic devices, and they also serve as useful models for understanding related bulk polymers.
Random walks with shape prior for cochlea segmentation in ex vivo μCT.
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel
2016-09-01
Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach in ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy results due to the probability density model constituted by the region term and shape prior information weighed by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
Coherent random lasing controlled by Brownian motion of the active scatterer
NASA Astrophysics Data System (ADS)
Liang, Shuofeng; Yin, Leicheng; Zhang, ZhenZhen; Xia, Jiangying; Xie, Kang; Zou, Gang; Hu, Zhijia; Zhang, Qijin
2018-05-01
The stability of the scattering loop is fundamental for coherent random lasing in a dynamic scattering system. In this work, fluorescence of DPP (N,N-di[3-(isobutyl polyhedral oligomeric silsesquioxane)propyl]perylene diimide) is scattered to produce random lasing (RL), and we realize the transition from incoherent to coherent RL by controlling the Brownian motion of the scatterers (dimer aggregates of DPP) and hence the stability of the scattering loop. To produce coherent random lasers, the loop needs to maintain a stable state within the loop-stable time, which can be determined through controlled Brownian motion of the scatterers in the scattering system. The results show that the loop-stable time lies between 5.83 × 10⁻⁵ s and 1.61 × 10⁻⁴ s, based on the transition from coherent to incoherent random lasing, and this time range can be tuned by finely controlling the viscosity of the solution. This work not only provides a method to predict the loop-stable time but also advances the study of the connection between Brownian motion and random lasers, opening the road to a variety of novel interdisciplinary investigations involving modern statistical mechanics and disordered photonics.
USDA-ARS?s Scientific Manuscript database
Calcium supplementation is a widely recognized strategy for achieving adequate calcium intake. We designed this blinded, randomized, crossover interventional trial to compare the bioavailability of a new stable synthetic amorphous calcium carbonate (ACC) with that of crystalline calcium carbonate (C...
František Nábělek's Iter Turcico-Persicum 1909-1910 - database and digitized herbarium collection.
Kempa, Matúš; Edmondson, John; Lack, Hans Walter; Smatanová, Janka; Marhold, Karol
2016-01-01
The Czech botanist František Nábělek (1884-1965) explored the Middle East in 1909-1910, visiting what are now Israel, Palestine, Jordan, Syria, Lebanon, Iraq, Bahrain, Iran and Turkey. He described four new genera, 78 species, 69 varieties and 38 forms of vascular plants, most of these in his work Iter Turcico-Persicum (1923-1929). The main herbarium collection of Iter Turcico-Persicum comprises 4163 collection numbers (some with duplicates), altogether 6465 specimens. It is currently deposited in the herbarium SAV. In addition, some fragments and duplicates are found in B, E, W and WU. The whole collection at SAV was recently digitized and both images and metadata are available via web portal www.nabelek.sav.sk, and through JSTOR Global Plants and the Biological Collection Access Service. Most localities were georeferenced and the web portal provides a mapping facility. Annotation of specimens is available via the AnnoSys facility. For each specimen a CETAF stable identifier is provided enabling the correct reference to the image and metadata.
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method, integrated with the related switching conditions, to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control ensuring that the steady-state tracking error converges rapidly. An application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
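A compact sketch of the two-stage idea, with a spectral reordering standing in for the paper's graph-based localization and a simple mean-shift score standing in for its similarity scores (both stand-ins are assumptions):

```python
# Sketch: (1) "localization"-style reordering that groups correlated entries
# into local neighborhoods, approximated here by sorting on the leading
# singular vectors; (2) REAL-style random extraction of submatrices,
# keeping the most coherent candidates.
import numpy as np

def localize(M):
    u, _, vt = np.linalg.svd(M - M.mean(), full_matrices=False)
    return M[np.argsort(u[:, 0])][:, np.argsort(vt[0])]

def real_extract(M, n_draws=500, k=10, l=8, keep=3, seed=0):
    rng = np.random.default_rng(seed)
    scored = []
    for _ in range(n_draws):
        r0 = int(rng.integers(0, M.shape[0] - k))
        c0 = int(rng.integers(0, M.shape[1] - l))
        sub = M[r0:r0 + k, c0:c0 + l]
        # simple coherence score: strength of the mean shift of the window
        scored.append((abs(sub.mean() - M.mean()), (r0, c0)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:keep]]

rng = np.random.default_rng(1)
M = rng.normal(size=(200, 60))
M[20:30, 10:18] += 3.0                  # planted bicluster
print(real_extract(localize(M)))        # windows should cluster on the planted block
```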
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure, defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.
1980-06-30
4. In [24] the iterative process (9) was applied to the calculation of the magnetization of thin magnetic films. This problem is of interest for computer ... equation ∫₀¹ (x−t)⁻¹ f(t) dt = g(x), x > 1. (1) Its multidimensional analogue ∫_A |x−t|⁻¹ f(t) dt = g(x), x ∈ A, A ⊂ D (2) can be interpreted as the problem of
First operation with the JET International Thermonuclear Experimental Reactor-like wall
NASA Astrophysics Data System (ADS)
Neu, R.; Arnoux, G.; Beurskens, M.; Bobkov, V.; Brezinsek, S.; Bucalossi, J.; Calabro, G.; Challis, C.; Coenen, J. W.; de la Luna, E.; de Vries, P. C.; Dux, R.; Frassinetti, L.; Giroud, C.; Groth, M.; Hobirk, J.; Joffrin, E.; Lang, P.; Lehnen, M.; Lerche, E.; Loarer, T.; Lomas, P.; Maddison, G.; Maggi, C.; Matthews, G.; Marsen, S.; Mayoral, M.-L.; Meigs, A.; Mertens, Ph.; Nunes, I.; Philipps, V.; Pütterich, T.; Rimini, F.; Sertoli, M.; Sieglin, B.; Sips, A. C. C.; van Eester, D.; van Rooij, G.; JET-EFDA Contributors
2013-05-01
To consolidate International Thermonuclear Experimental Reactor (ITER) design choices and prepare for its operation, Joint European Torus (JET) has implemented ITER's plasma facing materials, namely, Be for the main wall and W in the divertor. In addition, protection systems, diagnostics, and the vertical stability control were upgraded and the heating capability of the neutral beams was increased to over 30 MW. First results confirm the expected benefits and the limitations of all-metal plasma facing components (PFCs) but also yield understanding of operational issues directly relating to ITER. H-retention is lower by at least a factor of 10 in all operational scenarios compared to that with C PFCs. The lower C content (≈ factor 10) has led to much lower radiation during the plasma burn-through phase, eliminating breakdown failures. Similarly, the intrinsic radiation observed during disruptions is very low, leading to high power loads and to a slow current quench. Massive gas injection using a D2/Ar mixture restores levels of radiation and vessel forces similar to those of mitigated disruptions with the C wall. Dedicated L-H transition experiments indicate a 30% power threshold reduction, a distinct minimum density, and a pronounced shape dependence. The L-mode density limit was found to be up to 30% higher than for C, allowing stable detached divertor operation over a larger density range. Stable H-modes as well as the hybrid scenario could be re-established only when using gas puff levels of a few 10²¹ e s⁻¹. On average, the confinement is lower with the new PFCs, but nevertheless, H factors up to 1 (H-mode) and 1.3 (at βN≈3, hybrids) have been achieved with W concentrations well below the maximum acceptable level.
Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism
NASA Technical Reports Server (NTRS)
Dervan, Jared; Robertson, Brandan; Staab, Lucas; Culberson, Michael; Pellicciotti, Joseph
2014-01-01
The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations referenced in detail in the NESC final report [1] including identified lessons learned to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.
Assessing the limitations of the Banister model in monitoring training
Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.
2006-01-01
The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, the ill-conditioning and the stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence interval (95% CI) of parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R² = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CI of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated, making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765
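For reference, the Banister model underlying this analysis computes performance as an impulse response of training loads with fitness and fatigue time constants; a minimal sketch, with illustrative parameter values echoing the τa = 38 and τf = 19 estimates above:

```python
# Sketch of the Banister fitness-fatigue model:
# p(t) = p0 + k_a * sum_{s<t} w(s) e^{-(t-s)/tau_a} - k_f * sum_{s<t} w(s) e^{-(t-s)/tau_f}
# p0, k_a and k_f are illustrative; the paper estimates all parameters per swimmer.
import numpy as np

def banister(w, p0=500.0, k_a=1.0, k_f=2.0, tau_a=38.0, tau_f=19.0):
    t = np.arange(1, len(w) + 1)
    p = np.full(len(w), p0)
    for s, load in enumerate(w, start=1):
        later = t > s
        p[later] += load * (k_a * np.exp(-(t[later] - s) / tau_a)
                            - k_f * np.exp(-(t[later] - s) / tau_f))
    return p

loads = np.random.default_rng(1).uniform(0, 10, size=120)   # daily training loads
print(banister(loads)[-5:])                                 # modelled performances
```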
Collective iteration behavior for online social networks
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Li, Ren-De; Guo, Qiang; Zhang, Yi-Cheng
2018-06-01
Understanding the patterns of collective behavior in online social networks (OSNs) is critical to expanding the knowledge of human behavior and tie relationships. In this paper, we investigate a specific pattern called the social signature in Facebook and Wiki users' online communication behaviors, capturing the distribution of the frequency of interactions between different alters over time in the ego network. The empirical results show that there are robust social signatures of interactions no matter how friends change over time, which indicates that a stable communication pattern exists in online communication. By comparison with a random null model, we find that the communication pattern is heterogeneous between ego and alters. Furthermore, in order to regenerate the pattern of the social signature, we present a preferential interaction model, which assumes that new users tend to look for old users with strong ties, while old users have a tendency to interact with new friends. The experimental results show that the presented model can reproduce the heterogeneity of the social signature by adjusting two parameters, the number of communication targets m and the maximum number of interactions n: for Facebook users, m = n = 5; for Wiki users, m = 2 and n = 8. This work helps in deeply understanding the regularity of the social signature.
Computer Modeling of High-Intensity Cs-Sputter Ion Sources
NASA Astrophysics Data System (ADS)
Brown, T. A.; Roberts, M. L.; Southon, J. R.
The grid-point mesh program NEDLab has been used to model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space-charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space-charge constraints on ion emission from surfaces can be incorporated under Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
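Two of the stochastic features listed above, thermal emission energies and randomized emission angles, can be sketched as follows; the gamma-distributed (Maxwellian) energies, the cosine angular law, and the surface temperature are assumptions for illustration, not NEDLab internals:

```python
# Sketch: sample emission energies from a Maxwell-Boltzmann energy distribution
# (a Gamma(3/2, kT) law) and emission directions from a cosine angular law.
import numpy as np

rng = np.random.default_rng(0)
k_B_eV = 8.617e-5            # eV/K
T_surf = 1300.0              # K, assumed ionizer surface temperature

n = 10_000
energy = rng.gamma(shape=1.5, scale=k_B_eV * T_surf, size=n)  # thermal energy spread
theta = np.arcsin(np.sqrt(rng.uniform(size=n)))               # cosine-law polar angle
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)                   # uniform azimuth

print(f"mean energy {energy.mean():.3f} eV, "
      f"mean polar angle {np.degrees(theta).mean():.1f} deg")
```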
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed. A synthetic encoded complex amplitude is first fabricated: its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while its phase component is constructed by iterative phase information encoding and multiplexing for the high-level certification images. The synthetic encoded complex amplitude is then iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. In contrast, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but a remarkable peak appears in the nonlinear correlation coefficient between the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels within the same cascaded multilevel architecture.
Modeling and statistical analysis of non-Gaussian random fields with heavy-tailed distributions.
Nezhadhaghighi, Mohsen Ghasemi; Nakhlband, Abbas
2017-04-01
In this paper, we investigate and develop an alternative approach to the numerical analysis and characterization of random fluctuations with heavy-tailed probability distribution functions (PDFs), such as turbulent heat flow and solar flare fluctuations. We identify the heavy-tailed random fluctuations based on the scaling properties of the tail exponent of the PDF, the power-law growth of the qth-order correlation function, and the self-similar properties of the contour lines in two-dimensional random fields. Moreover, this work leads to a substitute for the fractional Edwards-Wilkinson (EW) equation that works in the presence of μ-stable Lévy noise. Our proposed model explains the configuration dynamics of systems with heavy-tailed correlated random fluctuations. We also present an alternative solution to the fractional EW equation in the presence of μ-stable Lévy noise in the steady state, which is implemented numerically using μ-stable fractional Lévy motion. Based on the analysis of the self-similar properties of contour loops, we numerically show that the scaling properties of contour loop ensembles can qualitatively and quantitatively distinguish non-Gaussian random fields from Gaussian random fluctuations.
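A minimal sketch of an Edwards-Wilkinson relaxation driven by heavy-tailed stable noise, here in one dimension with a spectral diffusion step; the stability index and all numeric values are illustrative, and the paper's fractional, two-dimensional setting is not reproduced:

```python
# Sketch: 1-D EW interface h_t = nu * h_xx + Levy noise, integrated by an
# exact Fourier-space diffusion step plus alpha-stable increments scaled
# by dt**(1/alpha), the stable-noise analogue of sqrt(dt) for Gaussian noise.
import numpy as np
from scipy.stats import levy_stable

N, nu, dt, alpha, steps = 256, 1.0, 1e-3, 1.5, 2000
k = 2.0 * np.pi * np.fft.fftfreq(N)
h = np.zeros(N)
for _ in range(steps):
    eta = levy_stable.rvs(alpha, 0.0, size=N)    # symmetric heavy-tailed noise
    h_hat = np.fft.fft(h)
    h_hat *= np.exp(-nu * k**2 * dt)             # exact diffusion step in Fourier space
    h = np.real(np.fft.ifft(h_hat)) + dt ** (1.0 / alpha) * eta
print("interface width:", h.std())
```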
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
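A minimal partitioned conjugate-heat-transfer sketch in the spirit of the scheme above: two 1-D domains with different conductivities advance separately, coupled explicitly through an interface value that enforces flux continuity for a piecewise-linear temperature. The real CHAMP scheme uses implicit sub-domain solves and optimized Robin weights; everything numeric here (grid, time step, conductivities) is an illustrative assumption:

```python
import numpy as np

def step(T, kappa, dx, dt, left, right):
    # one explicit FTCS diffusion step with Dirichlet end values
    Tn = T.copy()
    Tn[1:-1] += kappa * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[0], Tn[-1] = left, right
    return Tn

nx, dx, dt = 50, 1.0 / 50, 2e-5
k1, k2 = 1.0, 5.0                       # mismatched conductivities
T1, T2 = np.zeros(nx), np.ones(nx)      # domain 1 on [0,1], domain 2 on [1,2]
for _ in range(50000):
    # explicit interface coupling: k1*(Ti - T1[-2]) = k2*(T2[1] - Ti)
    Ti = (k1 * T1[-2] + k2 * T2[1]) / (k1 + k2)
    T1 = step(T1, k1, dx, dt, left=0.0, right=Ti)
    T2 = step(T2, k2, dx, dt, left=Ti, right=1.0)
print("interface temperature:", T1[-1])   # approaches k2/(k1+k2) ~ 0.83 at steady state
```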
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the amount of a priori knowledge required, the complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design that allows eco-toxicologists to examine complex multiple stressor interactions more accurately and efficiently. AID incorporates the power of the general linear model and the A-optimality criterion within an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressor effects, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (literature values, e.g. EC values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive interactions (e.g. Cd*Zn). Interestingly, our study found only binary interactions significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone has some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are very slowly damped. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regions, but slow convergence is noted for near-sonic and very low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation. Certain convergence difficulties have been encountered within stagnation regions. Nonetheless, for the airfoil flow with a sharp trailing edge, residuals converged quickly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
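The defect-correction iteration itself is compact: a stable low-order operator drives the solve while the target high-order operator only supplies residuals. A sketch for 1-D convection-diffusion, with upwind and central operators standing in for the paper's discretizations (grid size and Peclet number are illustrative):

```python
# Sketch of defect correction: u <- u + L1^{-1} (f - L2 u), where L1 is a
# first-order upwind operator and L2 the target second-order central operator.
import numpy as np

n, a, nu = 64, 1.0, 0.01
h = 1.0 / n
L1 = (np.diag(np.full(n, a / h + 2 * nu / h**2))
      + np.diag(np.full(n - 1, -a / h - nu / h**2), -1)
      + np.diag(np.full(n - 1, -nu / h**2), 1))              # upwind (stable)
L2 = (np.diag(np.full(n, 2 * nu / h**2))
      + np.diag(np.full(n - 1, -a / (2 * h) - nu / h**2), -1)
      + np.diag(np.full(n - 1, a / (2 * h) - nu / h**2), 1))  # central (accurate)

f = np.ones(n)
u = np.zeros(n)
for it in range(50):
    r = f - L2 @ u                       # residual of the high-order operator
    if np.linalg.norm(r) < 1e-10:
        break
    u += np.linalg.solve(L1, r)          # correction from the low-order operator
print(it, np.linalg.norm(f - L2 @ u))
```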
Static RAM data recorder for flight tests
NASA Astrophysics Data System (ADS)
Stoner, D. C.; Eklund, T. F. F.
A static random access memory (RAM) data recorder has been developed to recover strain and acceleration data during development tests of high-speed earth penetrating vehicles. Bilevel inputs are also available for continuity measurements. An iteration of this system was modified for use on water entry evaluations.
Sparse magnetic resonance imaging reconstruction using the Bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired using phase-encoding gradients in an MRI system. This is directly connected to the scan time of the system, which can be long. Therefore, many researchers have studied ways to reduce the scan time, notably compressed sensing (CS), which reconstructs sparse images from fewer samples when k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we investigated sparse-sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. Images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) from a phantom and an in vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated root-mean-square-error (RMSE) values from error images obtained using various numbers of Bregman iterations. Our reconstructed images from sparse sampling showed good results compared with the original images; moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse-sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
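The core of the Bregman iteration is an outer loop that adds the data residual back before re-solving a regularized subproblem. A minimal sketch for random Fourier undersampling, with soft thresholding standing in for the TV regularization used in the paper:

```python
# Sketch: Bregman iterative regularization around a simple iterative
# soft-thresholding (ISTA) solver, for an image sparse in the pixel domain
# sampled on a random k-space mask. Sizes and lambda are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.zeros((n, n)); x[rng.integers(0, n, 40), rng.integers(0, n, 40)] = 1.0
mask = rng.random((n, n)) < 0.5                      # ~half of k-space sampled

A  = lambda u: mask * np.fft.fft2(u) / n             # undersampled, normalized FFT
At = lambda v: np.real(np.fft.ifft2(mask * v)) * n   # its adjoint
y = A(x)

def ista(y_work, u, lam=0.05, iters=50):
    for _ in range(iters):                           # step size 1 is safe: ||A|| <= 1
        u = u + At(y_work - A(u))
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    return u

u, y_work = np.zeros((n, n)), y.copy()
for _ in range(8):                                   # Bregman outer iterations
    u = ista(y_work, u)
    y_work = y_work + (y - A(u))                     # add the residual back
print("RMSE:", np.sqrt(np.mean((u - x) ** 2)))
```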
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data, using both binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to perform these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
NASA Technical Reports Server (NTRS)
Ball, Danny (Technical Monitor); Pagitz, M.; Pellegrino, Xu S.
2004-01-01
This paper presents a computational study of the stability of simple lobed balloon structures. Two approaches are presented, one based on a wrinkled material model and one based on a variable Poisson's ratio model that eliminates compressive stresses iteratively. The first approach is used to investigate the stability of both a single isotensoid and a stack of four isotensoids, for perturbations of infinitesimally small amplitude. It is found that both structures are stable for global deformation modes, but unstable for local modes at sufficiently large pressure. Both structures are stable if an isotropic model is assumed. The second approach is used to investigate the stability of the isotensoid stack for large shape perturbations, taking into account contact between different surfaces. For this structure a distorted, stable configuration is found. It is also found that the volume enclosed by this configuration is smaller than that enclosed by the undistorted structure.
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
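A sketch of the filtered gradient idea: a plain gradient (Landweber) step followed by a smoothing filter at every iteration. A generic Gaussian random matrix stands in for the structured CSI matrix Φ, and the step size and filter width are illustrative:

```python
# Sketch: u <- filter(u + alpha * Phi^T (y - Phi u)), filtering each iterate.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n, m = 32, 600                                # 32x32 image, m measurements
u_true = gaussian_filter(rng.random((n, n)), 2.0)
Phi = rng.normal(size=(m, n * n)) / np.sqrt(m)
y = Phi @ u_true.ravel()

u, alpha = np.zeros(n * n), 0.2
for _ in range(200):
    u = u + alpha * Phi.T @ (y - Phi @ u)                 # gradient step
    u = gaussian_filter(u.reshape(n, n), 0.8).ravel()     # filtering step
print("relative error:",
      np.linalg.norm(u - u_true.ravel()) / np.linalg.norm(u_true))
```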
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult; it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body-wall models, both emulating the human abdominal wall, generated the aberration. Results after iteration improve aberration correction substantially, and both estimation methods converge, even in the case of strong aberration.
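The first estimation method, correlating each element signal with a reference, reduces to peak-picking a cross-correlation; a sketch with a synthetic pulse (sample rate, pulse shape and noise level are assumptions):

```python
# Sketch: estimate a per-element time delay by cross-correlating the element
# signal with a reference waveform and locating the correlation peak.
import numpy as np

rng = np.random.default_rng(0)
fs = 50e6                                   # sample rate, Hz (assumed)
t = np.arange(512) / fs
ref = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 5e-6) / 1e-6) ** 2)

true_delay = 7                              # samples
sig = np.roll(ref, true_delay) + 0.1 * rng.normal(size=ref.size)

xc = np.correlate(sig, ref, mode="full")
lag = np.argmax(xc) - (ref.size - 1)        # lag of the correlation peak
print("estimated delay:", lag, "samples,", lag / fs * 1e9, "ns")
```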
Pigeons ("Columba Livia") Approach Nash Equilibrium in Experimental Matching Pennies Competitions
ERIC Educational Resources Information Center
Sanabria, Federico; Thrailkill, Eric
2009-01-01
The game of Matching Pennies (MP), a simplified version of the more popular Rock, Paper, Scissors, schematically represents competitions between organisms with incentives to predict each other's behavior. Optimal performance in iterated MP competitions involves the production of random choice patterns and the detection of nonrandomness in the…
Brief Instrumental School-Based Mentoring for Middle School Students: Theory and Impact
ERIC Educational Resources Information Center
McQuillin, Samuel D.; Lyons, Michael D.
2016-01-01
This study evaluated the efficacy of an intentionally brief school-based mentoring program. This academic goal-focused mentoring program was developed through a series of iterative randomized controlled trials, and is informed by research in social cognitive theory, cognitive dissonance theory, motivational interviewing, and research in academic…
Experiments on individual strategy updating in iterated snowdrift game under random rematching.
Qi, Hang; Ma, Shoufeng; Jia, Ning; Wang, Guangchao
2015-03-07
How people actually play iterated snowdrift games, particularly under a random rematching protocol, is far from well explored. Two sets of laboratory experiments on the snowdrift game were conducted to investigate human strategy-updating rules. Four groups of subjects were modeled by experience-weighted attraction (EWA) learning theory at the individual level. Three out of the four groups (75%) passed model validation. Substantial heterogeneity is observed among the players, who update their strategies in four typical types, whereas few subjects behave like belief-based learners, even under fixed pairing. Most subjects (63.9%) adopt reinforcement learning (or similar) rules; but, interestingly, the performance of averaged reinforcement learners suffered. Two factors seem to benefit players in competition: sensitivity to their recent experiences and overall consideration of forgone payoffs. Moreover, subjects with changing opponents tend to learn faster from their own recent experience, and display more diverse strategy-updating rules than they do with a fixed opponent. These findings suggest that most subjects apply reinforcement-learning-like updating rules even under random rematching, although these rules may not improve their performance. The findings help evolutionary biology researchers understand sophisticated human behavioral strategies in social dilemmas. Copyright © 2015 Elsevier Ltd. All rights reserved.
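For reference, a minimal sketch of the experience-weighted attraction (EWA) update used for the modeling, in Camerer and Ho's standard form; the payoff table and the parameter values phi, rho, delta and lambda are illustrative, not the fitted values from these experiments:

```python
# Sketch: EWA attraction update
#   A_j(t) = (phi*N(t-1)*A_j(t-1) + [delta + (1-delta)*I(s_j, own)] * payoff) / N(t)
#   N(t)   = rho*N(t-1) + 1, with logit choice probabilities.
import numpy as np

payoff = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0}  # snowdrift-like

def ewa_update(A, N, own, other, phi=0.9, rho=0.9, delta=0.5, lam=1.0):
    N_new = rho * N + 1.0
    for j, s in enumerate(("C", "D")):
        hit = 1.0 if s == own else 0.0
        A[j] = (phi * N * A[j] + (delta + (1 - delta) * hit) * payoff[(s, other)]) / N_new
    p = np.exp(lam * A); p /= p.sum()          # logit choice probabilities
    return A, N_new, p

A, N = np.zeros(2), 1.0
for own, other in [("C", "D"), ("D", "C"), ("C", "C")]:
    A, N, p = ewa_update(A, N, own, other)
print("P(C), P(D) =", p)
```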
Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER
NASA Astrophysics Data System (ADS)
Brezinsek, S.; JET-EFDA contributors
2015-08-01
The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes, (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. Main observations are: (a) a low primary erosion source in H-mode plasmas and a reduction of material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30 - 50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10⁻⁵ and is determined by beryllium ions. (b) A reduction of the long-term fuel retention (factor 10 - 20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirms its advantageous PSI behaviour, and gives strong support to the ITER material selection.
A path to stable low-torque plasma operation in ITER with test blanket modules
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Snipes, J. A.; Reimerdes, H.; Paz-Soldan, C.; Logan, N.; Hanson, J. M.; Buttery, R. J.; deGrassie, J. S.; Garofalo, A. M.; Gray, T. K.; Grierson, B. A.; King, J. D.; Kramer, G. J.; La Haye, R. J.; Pace, D. C.; Park, J.-K.; Salmi, A.; Shiraki, D.; Strait, E. J.; Solomon, W. M.; Tala, T.; Van Zeeland, M. A.
2017-03-01
New experiments in the low-torque ITER Q = 10 scenario on DIII-D demonstrate that n = 1 magnetic fields from a single row of ex-vessel control coils enable operation at ITER performance metrics in the presence of applied non-axisymmetric magnetic fields from a test blanket module (TBM) mock-up coil. With n = 1 compensation, operation below the ITER-equivalent injected torque is successful at three times the ITER equivalent toroidal magnetic field ripple for a pair of TBMs in one equatorial port, whereas the uncompensated TBM field leads to rotation collapse, loss of H-mode and plasma current disruption. In companion experiments at high plasma beta, where the n = 1 plasma response is enhanced, uncorrected TBM fields degrade energy confinement and the plasma angular momentum while increasing fast ion losses; however, disruptions are not routinely encountered owing to increased levels of injected neutral beam torque. In this regime, n = 1 field compensation leads to recovery of a dominant fraction of the TBM-induced plasma pressure and rotation degradation, and an 80% reduction in the heat load to the first wall. These results show that the n = 1 plasma response plays a dominant role in determining plasma stability, and that n = 1 field compensation alone not only recovers most of the impact on plasma performance of the TBM, but also protects the first wall from potentially damaging heat flux. Despite these benefits, plasma rotation braking from the TBM fields cannot be fully recovered using standard error field control. Given the uncertainty in extrapolation of these results to the ITER configuration, it is prudent to design the TBMs with as low a ferromagnetic mass as possible without jeopardizing the TBM mission.
NASA Astrophysics Data System (ADS)
Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.
2012-10-01
A new ITER-relevant non-disruptive error field (EF) assessment technique, not restricted to low density and thus low beta, was demonstrated at the Extrap-T2R reversed field pinch. Resistive wall modes (RWMs) were generated and their rotation sustained by rotating magnetic perturbations. In particular, stable modes of toroidal mode number n=8 and 10 and unstable modes of n=1 were used in this experiment. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the RWMs were observed to rotate non-uniformly and be modulated in amplitude (in the case of unstable modes, the observed oscillation was superimposed on the mode growth). This behavior was used to infer the amplitude and toroidal phase of the n=1, 8 and 10 EFs. The method was first tested against known, deliberately applied EFs, and then against actual intrinsic EFs. Applying equal and opposite corrections resulted in longer discharges and more uniform mode rotation, indicating good EF compensation. The results agree with a simple theoretical model. Extensions to tearing modes, to the non-uniform plasma response to rotating perturbations, and to tokamaks, including ITER, are discussed.
A finite element solver for 3-D compressible viscous flows
NASA Technical Reports Server (NTRS)
Reddy, K. C.; Reddy, J. N.; Nayani, S.
1990-01-01
Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamics (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady-state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested on Couette flow, here taken as flow between two parallel plates in relative motion under a pressure gradient. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
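The heart of the scheme, Gauss-Seidel with symmetric (forward then backward) sweeps, can be sketched for a linear model problem as follows; the dense-matrix form is for clarity only, not how a matrix-free finite element code would implement it:

```python
# Sketch: symmetric Gauss-Seidel iteration for A u = b.
import numpy as np

def sgs_solve(A, b, iters=200, tol=1e-10):
    n = len(b)
    u = np.zeros(n)
    for _ in range(iters):
        for i in range(n):                 # forward sweep
            u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
        for i in range(n - 1, -1, -1):     # backward sweep
            u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ u) < tol:
            break
    return u

A = np.diag(np.full(50, 4.0)) + np.diag(np.full(49, -1.0), 1) + np.diag(np.full(49, -1.0), -1)
b = np.ones(50)
print("residual:", np.linalg.norm(b - A @ sgs_solve(A, b)))
```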
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian
2018-06-01
A novel image encryption method is proposed using the expanded fractional Fourier transform, which is implemented with a pair of lenses whose centers are separated in the cross section of the optical axis. The encryption system is formulated with Fresnel diffraction and phase modulation for the calculation of information transmission. An iterative process with the transform unit is utilized for hiding the secret image. The structural parameters of the set of lenses serve as additional keys. The performance of the encryption method is analyzed theoretically and digitally. The results show that the security of this algorithm is markedly enhanced by the added keys.
The AFLOW Standard for High-throughput Materials Science Calculations
2015-01-01
… inversion in the iterative subspace (RMM–DIIS) [10]. Of the two, DBS is known to be the slower and more stable option. Additionally, the subspace … RMM–DIIS steps as needed to fulfill the dEelec condition. Later determinations of system forces are performed by a similar sequence, but only a single …
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Krauthammer, Prof. Michael
2010-01-01
There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
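For context, a standard affine projection update with a regularized normal-equation solve is sketched below; the paper's Gauss-Seidel approximation of this solve and its prediction-based step-size control are intentionally omitted, and all sizes and constants are illustrative:

```python
# Sketch: affine projection (AP) adaptive filter of order P,
#   w <- w + mu * X^T (X X^T + delta I)^{-1} e,  e = d - X w.
import numpy as np

def ap_step(w, X, d, mu=0.5, delta=1e-4):
    # X: P x L matrix of the last P input vectors, d: last P desired samples
    e = d - X @ w                                   # a-priori error vector
    g = np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)
    return w + mu * X.T @ g

rng = np.random.default_rng(0)
L, P, n = 16, 4, 3000
w_true = rng.normal(size=L)                         # unknown system to identify
x = rng.normal(size=n)
w = np.zeros(L)
for k in range(L + P, n):
    X = np.array([x[k - p - L + 1:k - p + 1][::-1] for p in range(P)])
    d = X @ w_true + 0.01 * rng.normal(size=P)
    w = ap_step(w, X, d)
print("coefficient error:", np.linalg.norm(w - w_true))
```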
Zhou, Wenjie; Wei, Xuesong; Wang, Leqin; Wu, Guangkuan
2017-05-01
Solving for the static equilibrium position is one of the most important parts of dynamic coefficient calculation and of further coupled calculations for the rotor system. The main contribution of this study is testing a superlinearly convergent iteration method, the twofold secant method, for the determination of the static equilibrium position of a journal bearing of finite length. Essentially, the Reynolds equation for stable motion is solved by the finite difference method, and the inner pressure is obtained by the successive over-relaxation iterative method reinforced by the compound Simpson quadrature formula. The accuracy and efficiency of the twofold secant method are higher in comparison with the secant method and dichotomy: the total number of iterative steps required by the twofold secant method is about one-third that of the secant method and less than one-eighth that of dichotomy for the same equilibrium position. Calculations of the equilibrium position and pressure distribution were performed for different bearing lengths, clearances and rotating speeds. In the results, the eccentricity shows an inverse linear relationship with the attitude angle. The influence of the bearing length, clearance and bearing radius on the load-carrying capacity was also investigated; the results illustrate that larger bearing length, larger radius and smaller clearance improve the load-carrying capacity of a journal bearing. The application of the twofold secant method can greatly reduce the computational time for calculation of the dynamic coefficients and dynamic characteristics of a rotor-bearing system with a journal bearing of finite length.
On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies
ERIC Educational Resources Information Center
Erev, Ido; Barron, Greg
2005-01-01
Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives…
ERIC Educational Resources Information Center
Castillo, Jose M.; Curtis, Michael J.; Gelley, Cheryl
2012-01-01
Every 5 years, the National Association of School Psychologists (NASP) conducts a national study of the field. Surveys are sent to randomly selected regular members of NASP to gather information on school psychologists' demographic characteristics, context for professional practices, and professional practices. The latest iteration of the national…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Liu, Fang; Martínez, Todd J., E-mail: toddjmartinez@gmail.com
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this “difference self-consistent field (dSCF)” picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TERACHEM SCF implementation.
A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems
NASA Astrophysics Data System (ADS)
Chan, Tony; Szeto, Tedd
1994-03-01
We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which is itself a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, one cause of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles such breakdowns, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
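SciPy ships the classical CGS method that the composite step variant stabilizes; a usage sketch on a small nonsymmetric system (the matrix is illustrative), which can be compared against BiCGSTAB:

```python
# Usage sketch: classical CGS vs BiCGSTAB on a nonsymmetric tridiagonal system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cgs, bicgstab

n = 500
A = diags([-1.0, 2.5, -1.3], [-1, 0, 1], shape=(n, n), format="csr")  # nonsymmetric
b = np.ones(n)

x_cgs, info_cgs = cgs(A, b)
x_bicg, info_bicg = bicgstab(A, b)
print("cgs residual:", np.linalg.norm(b - A @ x_cgs), "info:", info_cgs)
print("bicgstab residual:", np.linalg.norm(b - A @ x_bicg), "info:", info_bicg)
```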
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = λBq with band-symmetric A and B, B also being positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves to be significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
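A sketch of the two-phase idea for a standard symmetric tridiagonal problem: a Sturm sequence count isolates a root by bisection, then inverse iteration refines the eigenpair. The paper's banded generalized problem Aq = λBq is not reproduced; this illustrates only the mechanics:

```python
# Sketch: Sturm count (signs of the LDL^T pivots of A - sigma*I) plus
# bisection to isolate the smallest eigenvalue, then inverse iteration.
import numpy as np

def count_below(d, e, sigma):
    # number of eigenvalues of tridiag(d, e) below sigma
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - sigma - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q < 0:
            count += 1
    return count

n = 100
d = np.full(n, 2.0); e = np.full(n - 1, -1.0)
lo, hi = 0.0, 4.0
for _ in range(40):                       # bisect until the lowest root is isolated
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if count_below(d, e, mid) >= 1 else (mid, hi)

A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
q = np.ones(n)
for _ in range(5):                        # inverse iteration at the isolating shift
    q = np.linalg.solve(A - lo * np.eye(n), q)
    q /= np.linalg.norm(q)
lam = q @ A @ q                           # Rayleigh quotient
print("smallest eigenvalue:", lam, "exact:", 2 - 2 * np.cos(np.pi / (n + 1)))
```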
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Integrated prototyping environment for programmable automation
NASA Astrophysics Data System (ADS)
da Costa, Francis; Hwang, Vincent S. S.; Khosla, Pradeep K.; Lumia, Ronald
1992-11-01
We propose a rapid prototyping environment for robotic systems, based on tenets of modularity, reconfigurability and extendibility, that may help build robot systems 'faster, better, and cheaper.' Given a task specification (e.g., repair brake assembly), the user browses through a library of building blocks that include both hardware and software components. Software advisors or critics recommend how blocks may be 'snapped' together to speedily construct alternative ways to satisfy task requirements. Mechanisms to allow 'swapping' competing modules for comparative test and evaluation studies are also included in the prototyping environment. After some iterations, a stable configuration or 'wiring diagram' emerges. This customized version of the general prototyping environment still contains all the hooks needed to incorporate future improvements in component technologies and to obviate unplanned obsolescence. The prototyping environment so described is relevant for both interactive robot programming (telerobotics) and iterative robot system development (prototyping).
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059
Communication: A difference density picture for the self-consistent field ansatz.
Parrish, Robert M; Liu, Fang; Martínez, Todd J
2016-04-07
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
Communication: A difference density picture for the self-consistent field ansatz
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Liu, Fang; Martínez, Todd J.
2016-04-01
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
Gao, Xue; Jiang, Wei; Jiménez-Osés, Gonzalo; Choi, Moon Seok; Houk, Kendall N.; Tang, Yi; Walsh, Christopher T.
2013-01-01
The bimodular 276 kDa nonribosomal peptide synthetase AspA from Aspergillus alliaceus, heterologously expressed in Saccharomyces cerevisiae, converts tryptophan and two molecules of the aromatic β-amino acid anthranilate (Ant) into a pair of tetracyclic peptidyl alkaloids, asperlicin C and D, in a ratio of 10:1. The first module of AspA activates and processes two molecules of Ant iteratively to generate a tethered Ant-Ant-Trp-S-enzyme intermediate on module two. Release is postulated to involve tandem cyclizations, in which the first step is the macrocyclization of the linear tripeptidyl-S-enzyme by the terminal condensation (CT) domain to generate the regioisomeric tetracyclic asperlicin scaffolds. Computational analysis of the transannular cyclization of the 11-membered macrocyclic intermediate shows that asperlicin C is the kinetically favored product due to the high stability of a conformation resembling the transition state for cyclization, while asperlicin D is thermodynamically more stable. PMID:23890005
Iterative updating of model error for Bayesian inversion
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew
2018-02-01
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
Analytic method for calculating properties of random walks on networks
NASA Technical Reports Server (NTRS)
Goldhirsch, I.; Gefen, Y.
1986-01-01
A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin
2014-01-01
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
Parallel solution of the symmetric tridiagonal eigenproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-01-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
Optimized random phase only holograms.
Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto
2018-02-15
We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
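The ORAP construction rests on the standard Gerchberg-Saxton loop. A minimal NumPy sketch of that loop is below; the array size, iteration count and square test target are illustrative assumptions, and in the ORAP scheme the optimization is run once for the optical system rather than per target:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Classic G-S loop for a Fourier phase-only hologram."""
    rng = np.random.default_rng(seed)
    holo = np.exp(2j * np.pi * rng.random(target_amp.shape))  # random phase start
    for _ in range(n_iter):
        img = np.fft.ifft2(holo)                              # hologram -> image plane
        img = target_amp * np.exp(1j * np.angle(img))         # impose target amplitude
        holo = np.exp(1j * np.angle(np.fft.fft2(img)))        # keep phase only
    return np.angle(holo)

# Illustrative square target; the returned phase pattern plays the role of
# the optimized random phase that later targets reuse without iteration.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase = gerchberg_saxton(target)
```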
Weak convergence to isotropic complex SαS random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed by integrals based on the Poisson process with random intensity.
NASA Astrophysics Data System (ADS)
Chacon, L.; Finn, J. M.; Knoll, D. A.
2000-10-01
Recently, a new parallel velocity instability has been found (J. M. Finn, Phys. Plasmas 2, 12 (1995)). This mode is a tearing mode driven unstable by curvature effects and sound wave coupling in the presence of parallel velocity shear. Under such conditions, linear theory predicts that tearing instabilities will grow even in situations in which the classical tearing mode is stable. This could then be a viable seed mechanism for the neoclassical tearing mode, and hence a non-linear study is of interest. Here, the linear and non-linear stages of this instability are explored using a fully implicit, fully nonlinear 2D reduced resistive MHD code (L. Chacon et al., 'Implicit, Jacobian-free Newton-Krylov 2D reduced resistive MHD nonlinear solver,' submitted to J. Comput. Phys. (2000)), including viscosity and particle transport effects. The nonlinear implicit time integration is performed using the Newton-Raphson iterative algorithm. Krylov iterative techniques are employed for the required algebraic matrix inversions, implemented Jacobian-free (i.e., without ever forming and storing the Jacobian matrix), and preconditioned with a 'physics-based' preconditioner. Nonlinear results indicate that, for large total plasma beta and large parallel velocity shear, the instability results in the generation of large poloidal shear flows and large magnetic islands even in regimes where the classical tearing mode is absolutely stable. For small viscosity, the time-asymptotic state can be turbulent.
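The Jacobian-free Newton-Krylov machinery the code relies on can be illustrated with SciPy's newton_krylov on a toy 1D nonlinear boundary-value problem; the residual below is an assumed stand-in, not the reduced-MHD system, and no physics-based preconditioner is used:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    # Assumed toy problem: u'' - u**3 + 1 = 0 on (0, 1), with u(0) = u(1) = 0.
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap - u**3 + 1.0

# The Jacobian is never formed: newton_krylov only evaluates the residual and
# approximates Jacobian-vector products by finite differences inside Krylov solves.
u = newton_krylov(residual, np.zeros(n), method='lgmres', f_tol=1e-8)
print(np.abs(residual(u)).max())
```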
Search for Directed Networks by Different Random Walk Strategies
NASA Astrophysics Data System (ADS)
Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long
2012-03-01
A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because differences in the search efficiency of the strategies are rooted in network clustering, the clustering coefficient in a random walker's eye on directed networks is defined and computed to be half of that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model and the clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the situation on undirected networks since the clustering coefficient of directed networks is smaller than that of undirected networks.
NASA Astrophysics Data System (ADS)
Kiyohara, Shin; Mizoguchi, Teruyasu
2018-03-01
Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate the dopant segregation behavior at the grain boundary, an enormous number of combinations have to be considered in the segregation of multiple dopants at the complex grain boundary structures. Here, two data mining techniques, the random-forests regression and the genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations and it has been shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective to investigate the dopant segregation behaviors at grain boundaries.
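A hedged sketch of the random-forests half of such a workflow: segregation configurations are encoded as occupation vectors and the regressor screens a large pool for the predicted most stable one. The features, "energies" and sizes below are invented stand-ins for the atomistic data, not the paper's inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sites, n_train = 20, 200              # hypothetical grain-boundary site count

# Invented training data: binary occupation vectors over candidate segregation
# sites, with a synthetic linear "segregation energy" plus noise standing in
# for the small fraction of configurations actually computed ab initio.
X_train = rng.integers(0, 2, size=(n_train, n_sites)).astype(float)
site_energy = rng.normal(size=n_sites)
y_train = X_train @ site_energy + 0.1 * rng.normal(size=n_train)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Screen a large pool of unseen configurations for the predicted most stable one.
X_pool = rng.integers(0, 2, size=(5000, n_sites)).astype(float)
best = X_pool[np.argmin(model.predict(X_pool))]
```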
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Comer Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacing.
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
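One standard form of Riccati iteration is the Newton-Kleinman scheme, which reduces the algebraic Riccati equation to a sequence of Lyapunov solves; the sketch below, on a toy stable system, is an assumed illustration and may differ from the report's exact formulation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Toy LQR data; A is stable, so K = 0 is a valid stabilizing starting gain.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.zeros((1, 2))
for _ in range(15):
    Acl = A - B @ K
    # Newton-Kleinman step: solve Acl' P + P Acl = -(Q + K' R K).
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the stabilizing Riccati solution.
print(np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8))
```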
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
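The random-probing idea can be made concrete with the classical Rademacher diagonal estimator: element-wise products of probes with Hessian-probe products average to diag(H). The explicit matrix below is an assumed stand-in for the matrix-free FWI Hessian, where each application would cost adjoint wavefield simulations:

```python
import numpy as np

def probe_diagonal(hess_apply, n, n_probes=64, seed=0):
    """Estimate diag(H) from Hessian-vector products with random test models:
    for Rademacher probes v, the average of v * (H v) converges to diag(H)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * hess_apply(v)        # one adjoint-based H*v application in FWI
    return acc / n_probes

# Explicit symmetric matrix standing in for the matrix-free tomographic Hessian.
R = np.random.default_rng(1).standard_normal((80, 80))
H = R @ R.T
est = probe_diagonal(lambda v: H @ v, 80)
print(np.corrcoef(est, np.diag(H))[0, 1])   # close to 1 for enough probes
```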
Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine
2017-07-01
Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
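A schematic version of the subsampling experiment, with a synthetic flashy series standing in for the USGS 15-min records and one random observation per window mimicking lower-frequency citizen observations (the exact bootstrap settings are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365 * 96                                  # one year of 15-min values
q = 1.0 + rng.gamma(0.1, 5.0, size=n)         # synthetic flashy flow series
dt = 15 * 60                                  # seconds per sample

truth = {"min": q.min(), "max": q.max(), "runoff": q.sum() * dt}

def subsample_stats(q, stride, n_iter=50):
    """One random observation per window of `stride` samples, repeated n_iter
    times, mimicking a bootstrap of lower-frequency observations."""
    rows = []
    for _ in range(n_iter):
        idx = np.arange(0, q.size - stride, stride) + rng.integers(0, stride)
        sub = q[idx]
        rows.append((sub.min(), sub.max(), sub.mean() * dt * q.size))
    return np.array(rows)

daily = subsample_stats(q, 96)                # ~daily citizen observations
monthly = subsample_stats(q, 96 * 30)         # ~monthly observations
print(truth["max"], daily[:, 1].mean(), monthly[:, 1].mean())
```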
Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything.
Adami, Christoph; Hintze, Arend
2013-01-01
Zero-determinant strategies are a new class of probabilistic and conditional strategies that are able to unilaterally set the expected payoff of an opponent in iterated plays of the Prisoner's Dilemma irrespective of the opponent's strategy (coercive strategies), or else to set the ratio between the player's and their opponent's expected payoff (extortionate strategies). Here we show that zero-determinant strategies are at most weakly dominant, are not evolutionarily stable, and will instead evolve into less coercive strategies. We show that zero-determinant strategies with an informational advantage over other players that allows them to recognize each other can be evolutionarily stable (and able to exploit other players). However, such an advantage is bound to be short-lived as opposing strategies evolve to counteract the recognition.
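The payoff claims can be checked numerically: for two memory-one strategies, play is a four-state Markov chain whose stationary distribution gives the long-run payoffs. The sketch below pits the extortionate strategy from Press and Dyson's construction against an always-cooperate opponent; the payoff values R, S, T, P = 3, 0, 5, 1 are the conventional choice, assumed here:

```python
import numpy as np

def stationary_payoffs(p, q):
    """Long-run payoffs for two memory-one iterated Prisoner's Dilemma players.
    p[s], q[s]: cooperation probabilities after outcome s in (CC, CD, DC, DD),
    each from the player's own perspective."""
    R, S, T, P = 3.0, 0.0, 5.0, 1.0
    q_sw = [q[0], q[2], q[1], q[3]]           # swap CD/DC for the opponent's view
    M = np.array([[p[s] * q_sw[s],
                   p[s] * (1 - q_sw[s]),
                   (1 - p[s]) * q_sw[s],
                   (1 - p[s]) * (1 - q_sw[s])] for s in range(4)])
    w, v = np.linalg.eig(M.T)                 # stationary distribution of the chain
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    return pi @ np.array([R, S, T, P]), pi @ np.array([R, T, S, P])

# Extortionate ZD strategy versus an unconditional cooperator: the
# extortioner's long-run payoff exceeds the cooperator's.
print(stationary_payoffs([11/13, 1/2, 7/26, 0], [1, 1, 1, 1]))
```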
Progress of the ELISE test facility: towards one hour pulses in hydrogen
NASA Astrophysics Data System (ADS)
Wünderlich, D.; Fantz, U.; Heinemann, B.; Kraus, W.; Riedl, R.; Wimmer, C.; the NNBI Team
2016-10-01
In order to fulfil the ITER requirements, the negative hydrogen ion source used for NBI has to deliver a high source performance, i.e. a high extracted negative ion current and simultaneously a low co-extracted electron current over a pulse length up to 1 h. Negative ions will be generated by the surface process in a low-temperature low-pressure hydrogen or deuterium plasma. Therefore, a certain amount of caesium has to be deposited on the plasma grid in order to obtain a low surface work function and consequently a high negative ion production yield. This caesium is re-distributed by the influence of the plasma, resulting in temporal instabilities of the extracted negative ion current and the co-extracted electrons over long pulses. This paper describes experiments performed in hydrogen operation at the half-ITER-size NNBI test facility ELISE in order to develop a caesium conditioning technique for more stable long pulses at an ITER relevant filling pressure of 0.3 Pa. A significant improvement of the long pulse stability is achieved. Together with different plasma diagnostics it is demonstrated that this improvement is correlated to the interplay of very small variations of parameters like the electrostatic potential and the particle densities close to the extraction system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Density-matrix-based algorithm for solving eigenvalue problems
NASA Astrophysics Data System (ADS)
Polizzi, Eric
2009-03-01
A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
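A schematic of the contour-integration idea, not the production FEAST solver: a quadrature approximation of the spectral projector filters a random block, followed by Rayleigh-Ritz. The quadrature count, subspace size and random test matrix below are illustrative assumptions:

```python
import numpy as np

def feast_sketch(A, lo, hi, m0=12, n_quad=16, n_iter=4, seed=0):
    """Contour-filtered subspace iteration + Rayleigh-Ritz for symmetric A."""
    n = A.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, m0))
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad   # upper half circle
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for t in theta:
            z = c + r * np.exp(1j * t)
            # taking the real part folds in the conjugate node on the lower half
            Q += np.real(r * np.exp(1j * t) *
                         np.linalg.solve(z * np.eye(n) - A, Y)) / n_quad
        Q, _ = np.linalg.qr(Q)                 # filtered search subspace
        lam, W = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz
        Y = Q @ W
    keep = (lam >= lo) & (lam <= hi)
    return lam[keep], Y[:, keep]

M = np.random.default_rng(1).standard_normal((100, 100))
A = (M + M.T) / 2.0
vals, vecs = feast_sketch(A, -1.0, 1.0)   # eigenpairs inside the interval
```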
A quantum relativistic battle of the sexes cellular automaton
NASA Astrophysics Data System (ADS)
Alonso-Sanz, Ramón; Situ, Haozhen
2017-02-01
The effect of variable entangling on the dynamics of a spatial quantum relativistic formulation of the iterated battle of the sexes game is studied in this work. The game is played in the cellular automata manner, i.e., with local and synchronous interaction. The game is assessed in fair and unfair contests. Despite the full range of quantum parameters initially accessible, the dynamics promptly converge to fairly stable configurations, which often show rich spatial structures in simulations with non-negligible entanglement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.R. Hudson; D.A. Monticello; A.H. Reiman
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schluter currents, diamagnetic currents, and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [Reiman and Greenside, Comp. Phys. Comm. 43 (1986) 157] which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator Experiment [Reiman, et al., Phys. Plasmas 8 (May 2001) 2083].
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.
2003-10-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasmas 8 2083).
Põder, Endel
2011-02-16
Dot lattices are very simple multi-stable images where the dots can be perceived as being grouped in different ways. The probabilities of grouping along different orientations as dependent on inter-dot distances along these orientations can be predicted by a simple quantitative model. L. Bleumers, P. De Graef, K. Verfaillie, and J. Wagemans (2008) found that for peripheral presentation, this model should be combined with random guesses on a proportion of trials. The present study shows that the probability of random responses decreases with decreasing ambiguity of lattices and is different for bi-stable and tri-stable lattices. With central presentation, similar effects can be produced by adding positional noise to the dots. The results suggest that different levels of internal positional noise might explain the differences between peripheral and central proximity grouping.
A new pivoting and iterative text detection algorithm for biomedical images.
Xu, Songhua; Krauthammer, Michael
2010-12-01
There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.
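A minimal sketch of projection-histogram text detection: alternate row and column projections and recurse into each detected band. The thresholds and recursion depth are invented defaults, and the published algorithm adds pivoting and evaluation steps not reproduced here:

```python
import numpy as np

def bands(binary, axis, thresh=1):
    """Runs where the projection histogram along `axis` exceeds `thresh`."""
    on = (binary.sum(axis=axis) >= thresh).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], on, [0]))))
    return list(zip(edges[::2], edges[1::2]))   # (start, stop) index pairs

def detect_text_boxes(binary, depth=2):
    """Alternate row/column projections, recursing into each detected block."""
    boxes = [(0, binary.shape[0], 0, binary.shape[1])]
    for level in range(depth):
        axis, new = (1 if level % 2 == 0 else 0), []
        for r0, r1, c0, c1 in boxes:
            sub = binary[r0:r1, c0:c1]
            for a, b in bands(sub, axis):
                if axis == 1:                   # row projection -> horizontal bands
                    new.append((r0 + a, r0 + b, c0, c1))
                else:                           # column projection -> vertical cuts
                    new.append((r0, r1, c0 + a, c0 + b))
        boxes = new
    return boxes
```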
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
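The core update can be sketched directly: each new sample advances every frequency bin by one complex rotation, so an N-bin update is O(N) and no resampling to uniform time is needed. This is a sketch of the accumulation step only; the windowing and forgetting terms of the published RFT are omitted:

```python
import numpy as np

class RecursiveFourier:
    """Accumulating nonuniform-time DFT: each new sample (x, t) updates all
    N frequency bins in O(N), with no interpolation onto a uniform grid."""
    def __init__(self, freqs):
        self.freqs = np.asarray(freqs, dtype=float)
        self.F = np.zeros(self.freqs.size, dtype=complex)
        self.n = 0
    def update(self, x, t):
        self.F += x * np.exp(-2j * np.pi * self.freqs * t)
        self.n += 1

# Heart-rate-variability style usage: irregular beat times, RR-interval series.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(600)           # RR intervals (s)
t_beats = np.cumsum(rr)
rft = RecursiveFourier(np.linspace(0.01, 0.5, 64))   # HRV band (Hz)
for x, t in zip(rr - rr.mean(), t_beats):
    rft.update(x, t)
psd = np.abs(rft.F) ** 2 / rft.n                     # crude periodogram scaling
```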
Fractal attractors in economic growth models with random pollution externalities
NASA Astrophysics Data System (ADS)
La Torre, Davide; Marsiglio, Simone; Privileggi, Fabio
2018-05-01
We analyze a discrete time two-sector economic growth model where the production technologies in the final and human capital sectors are affected by random shocks both directly (via productivity and factor shares) and indirectly (via a pollution externality). We determine the optimal dynamics in the decentralized economy and show how these dynamics can be described in terms of a two-dimensional affine iterated function system with probability. This allows us to identify a suitable parameter configuration capable of generating exactly the classical Barnsley's fern as the attractor of the log-linearized optimal dynamical system.
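The affine maps and probabilities of the classical Barnsley fern are standard; iterating the IFS by the chaos game reproduces the attractor referenced in the abstract (the economic interpretation of the coordinates is, of course, the paper's, not encoded here):

```python
import numpy as np

# Affine maps (matrix, offset, probability) of the classical Barnsley fern.
maps = [
    (np.array([[0.00,  0.00], [0.00, 0.16]]), np.array([0.0, 0.00]), 0.01),
    (np.array([[0.85,  0.04], [-0.04, 0.85]]), np.array([0.0, 1.60]), 0.85),
    (np.array([[0.20, -0.26], [0.23, 0.22]]), np.array([0.0, 1.60]), 0.07),
    (np.array([[-0.15, 0.28], [0.26, 0.24]]), np.array([0.0, 0.44]), 0.07),
]
probs = [m[2] for m in maps]

rng = np.random.default_rng(0)
x = np.zeros(2)
points = np.empty((50_000, 2))
for i in range(points.shape[0]):      # "chaos game" iteration of the IFS
    A, b, _ = maps[rng.choice(4, p=probs)]
    x = A @ x + b
    points[i] = x                     # the points trace out the fern attractor
```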
NASA Astrophysics Data System (ADS)
La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio
2018-05-01
We analyze a multi-sector growth model subject to random shocks affecting the two sector-specific production functions twofold: the evolution of both productivity and factor shares is the result of such exogenous shocks. We determine the optimal dynamics via Euler-Lagrange equations, and show how these dynamics can be described in terms of an iterated function system with probability. We also provide conditions that imply the singularity of the invariant measure associated with the fractal attractor. Numerical examples show how specific parameter configurations might generate distorted copies of the Barnsley's fern attractor.
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, which links to select among its neighbours according to the received data. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using the iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several criteria for consensus are obtained. A numerical example shows the reliability of the proposed methods.
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformation of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problem, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linearly deformed palmprint images with piecewise-linearly deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then the stable regions complying with the linear transformations are determined using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed model and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform state-of-the-art methods.
The feasibility and stability of large complex biological networks: a random matrix approach.
Stone, Lewi
2018-05-29
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depends on the minimum equilibrium population.
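The headline claim is easy to probe numerically: draw a sparse random interaction matrix with self-regulation, a positive diagonal D, and compare the rightmost eigenvalues of A and S = DA. The parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, C, sigma = 100, 0.2, 0.1               # size, connectance, interaction strength
A = rng.normal(0.0, sigma, (n, n)) * (rng.random((n, n)) < C)
np.fill_diagonal(A, -1.0)                 # self-regulation keeps A locally stable

D = np.diag(rng.lognormal(0.0, 0.5, n))   # positive equilibria: a feasible system
print(np.linalg.eigvals(A).real.max(),        # < 0: A is stable
      np.linalg.eigvals(D @ A).real.max())    # also < 0, illustrating the result
```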
Crack Modelling for Radiography
NASA Astrophysics Data System (ADS)
Chady, T.; Napierała, L.
2010-02-01
In this paper, the possibility of creating three-dimensional crack models, both of random type and based on real-life radiographic images, is discussed. A method for storing cracks in a number of two-dimensional matrices, as well as an algorithm for their reconstruction into three-dimensional objects, is presented. The possibility of using an iterative algorithm for matching simulated images of cracks to real-life radiographic images is also discussed.
Combinational Circuit Obfuscation Through Power Signature Manipulation
2011-06-01
NASA Astrophysics Data System (ADS)
Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli
2016-10-01
Chi-squared transform (CST), as a statistical method, can describe the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to considerable noise in the change detection result. An improved unsupervised change detection method is proposed based on spatial-constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of the bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter of the SCCST method, the confidence level, a pseudotraining dataset is constructed to estimate the optimal value. Then the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
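The CST core is a per-pixel Mahalanobis distance on the difference image, thresholded against a chi-squared quantile; the sketch below omits the spatial-constraint and MRF refinement steps that distinguish SCCST, and the confidence level is an assumed default:

```python
import numpy as np
from scipy.stats import chi2

def cst_change_map(img1, img2, alpha=0.01):
    """Chi-squared transform change detection on a coregistered image pair
    (bands-last arrays); the spatial constraint and MRF refinement of SCCST
    are not reproduced in this sketch."""
    D = (img2.astype(float) - img1.astype(float)).reshape(-1, img1.shape[-1])
    mu = D.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(D, rowvar=False))
    Y = np.einsum('ij,jk,ik->i', D - mu, cov_inv, D - mu)   # squared Mahalanobis
    # Y is ~chi-squared(bands) for unchanged pixels; flag the upper tail.
    return (Y > chi2.ppf(1 - alpha, df=img1.shape[-1])).reshape(img1.shape[:-1])
```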
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy from RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
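For orientation, here is plain RANSAC on a 2D line-fitting toy; the paper's contribution replaces the uniform two-point sampling below with harmony-search-guided sampling of candidates (tolerances and point counts are illustrative assumptions):

```python
import numpy as np

def ransac_line(pts, n_iter=500, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = (0, None)
    for _ in range(n_iter):
        i, j = rng.choice(pts.shape[0], size=2, replace=False)  # uniform sampling
        p, d = pts[i], pts[j] - pts[i]
        nrm = np.linalg.norm(d)
        if nrm == 0.0:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / nrm
        inliers = int((dist < tol).sum())
        if inliers > best[0]:
            best = (inliers, (pts[i], pts[j]))
    return best

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
line_pts = np.c_[x, 0.5 * x + 0.02 * rng.standard_normal(100)]
pts = np.vstack([line_pts, rng.random((40, 2))])   # inliers plus outliers
print(ransac_line(pts)[0])                         # inlier count of the best model
```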
An Iterative Method for Problems with Multiscale Conductivity
Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je
2012-01-01
A model whose conductivity varies greatly across a very thin layer is considered. It is related to a stable phantom model, which is designed to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high contrast of conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that from the reference solution for a sufficiently small mesh size h. The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238
Error analysis in inverse scatterometry. I. Modeling.
Al-Assaad, Rayan M; Byrne, Dale M
2007-02-01
Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.
Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.
Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal
2016-11-01
Breast cancer is one of the most common cancers found worldwide and is most frequently found in women. An early detection of breast cancer provides the possibility of its cure; therefore, a large number of studies are currently going on to identify methods that can detect breast cancer in its early stages. This study aimed to determine the effects of the k-means clustering algorithm with different computation measures like centroid, distance, split method, epoch, attribute, and iteration, and to identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for the centroid initialization. For the foggy centroid, the first centroid was calculated from random values. For the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using the Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
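A compact k-means with swappable distance and initialization, mirroring the parameters varied in the study; this is a sketch, not the authors' code, and for the 'random' start we jitter within the data range rather than use the paper's literal (0, 0) origin, to keep the toy run non-degenerate:

```python
import numpy as np

def kmeans(X, k, distance="euclidean", init="foggy", n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    if init == "foggy":                       # first centroids from random samples
        C = X[rng.choice(X.shape[0], k, replace=False)].astype(float)
    else:                                     # stand-in for the paper's 'random' start
        C = rng.uniform(X.min(0), X.max(0), (k, X.shape[1]))
    for _ in range(n_iter):
        diff = X[:, None, :] - C[None, :, :]
        if distance == "manhattan":           # (a k-medians update would be the
            d = np.abs(diff).sum(-1)          #  principled partner for this metric)
        else:
            d = (diff ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels, C

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, C = kmeans(X, 2, distance="manhattan")
```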
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
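A sketch of randomized DMD: sketch the range of the snapshot matrix with a random test matrix, optionally sharpen it with power iterations, then run exact DMD in the compressed space. The rank, oversampling and iteration counts are illustrative, and this multi-pass variant is only one of the families the abstract describes:

```python
import numpy as np

def randomized_dmd(X, Y, r, p=10, n_power=2, seed=0):
    """X, Y: snapshot matrices with Y ~ A X column-wise; r: target rank."""
    rng = np.random.default_rng(seed)
    Z = X @ rng.standard_normal((X.shape[1], r + p))   # sketch the range of X
    for _ in range(n_power):                           # power iterations sharpen it
        Z = X @ (X.T @ Z)
    Q, _ = np.linalg.qr(Z)
    Xc, Yc = Q.T @ X, Q.T @ Y                          # compressed snapshots
    U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.T @ Yc @ Vh.T / s                       # low-rank evolution operator
    evals, W = np.linalg.eig(Atilde)                   # DMD eigenvalues
    modes = Q @ (Yc @ Vh.T / s) @ W                    # DMD modes lifted back
    return evals, modes

# Typical usage: for a data matrix D of snapshots, X = D[:, :-1], Y = D[:, 1:].
```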
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapthy, E.
1984-01-01
A single-level, effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single-level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Numerical form-finding method for large mesh reflectors with elastic rim trusses
NASA Astrophysics Data System (ADS)
Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli
2018-06-01
Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Owing to the large aperture, light weight, and high accuracy requirements on spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. Then, the form-finding method is embedded into an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the cable-net form-finding with boundary conditions updated for the rim-truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical one.
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix X0 from n measurements y_i = Tr(A_i^T X0), where each A_i is an M-by-N measurement matrix with i.i.d. random entries, i = 1, ..., n. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object X0 to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the (r, n) plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
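The authors' AMP-based shrinker is specialized; purely as a hedged illustration of the general iterate-and-shrink template, here is a plain proximal-gradient (soft-thresholding) recovery loop for the same measurement model, with all names and defaults my own:

```python
import numpy as np

def matrix_recovery_ist(A, y, shape, lam=1.0, mu=None, iters=200):
    """Iterative soft-thresholding of singular values (nuclear-norm
    regularized least squares), not the paper's AMP algorithm.
    A has rows vec(A_i); y_i = <A_i, X0>."""
    mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2       # step from the spectral norm
    X = np.zeros(shape)
    for _ in range(iters):
        grad = A.T @ (A @ X.ravel() - y)             # gradient of 0.5*||A vec(X) - y||^2
        Z = X - mu * grad.reshape(shape)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - mu * lam, 0.0)            # soft-threshold the spectrum
        X = (U * s) @ Vt
    return X
```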
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
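A minimal dense-matrix sketch of the diagonally preconditioned conjugate gradient loop follows; in animal-breeding codes the product A @ p would be accumulated by iteration on data (reading the records each round) rather than from a stored matrix, and arithmetic would be double precision as the study recommends:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_rounds=10000):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner;
    convergence is checked on the relative residual."""
    Minv = 1.0 / np.diag(A)              # diagonal preconditioner
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_rounds):
        Ap = A @ p                       # with iteration on data, formed from records
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1              # solution and rounds of iteration used
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_rounds
```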
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
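As a loose illustration only (not the paper's NEM hyperplane condition), the following toy trainer adds annealed zero-mean Gaussian noise to the output-layer training signal of a softmax classifier; all names and defaults are assumptions, and the annealing reflects the observation that the largest gains occur in the first few iterations:

```python
import numpy as np

def train_noisy_softmax(X, labels, classes=10, lr=0.1, epochs=50, noise0=0.5, seed=0):
    """Illustrative noise-injected training of a softmax (output) layer."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], classes))
    T = np.eye(classes)[labels]                    # one-hot targets
    for ep in range(epochs):
        P = np.exp(X @ W)
        P /= P.sum(axis=1, keepdims=True)          # softmax outputs
        noise = noise0 / (1 + ep) * rng.standard_normal(T.shape)  # annealed noise
        grad = X.T @ (P - (T + noise)) / len(X)    # noisy cross-entropy gradient
        W -= lr * grad
    return W
```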
NASA Astrophysics Data System (ADS)
Albert, L.; Rottensteiner, F.; Heipke, C.
2015-08-01
Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
Concurrent design of quasi-random photonic nanostructures
Lee, Won-Kyu; Yu, Shuangcheng; Engel, Clifford J.; Reese, Thaddeus; Rhee, Dongjoon; Chen, Wei
2017-01-01
Nanostructured surfaces with quasi-random geometries can manipulate light over broadband wavelengths and wide ranges of angles. Optimization and realization of stochastic patterns have typically relied on serial, direct-write fabrication methods combined with real-space design. However, this approach is not suitable for customizable features or scalable nanomanufacturing. Moreover, trial-and-error processing cannot guarantee fabrication feasibility because processing–structure relations are not included in conventional designs. Here, we report wrinkle lithography integrated with concurrent design to produce quasi-random nanostructures in amorphous silicon at wafer scales that achieved over 160% light absorption enhancement from 800 to 1,200 nm. The quasi-periodicity of patterns, materials filling ratio, and feature depths could be independently controlled. We statistically represented the quasi-random patterns by Fourier spectral density functions (SDFs) that could bridge the processing–structure and structure–performance relations. Iterative search of the optimal structure via the SDF representation enabled concurrent design of nanostructures and processing. PMID:28760975
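A radially averaged spectral density function of the kind used to represent such patterns can be computed directly with an FFT; this sketch assumes a 2D array encoding the (binary) pattern and is not the authors' design pipeline:

```python
import numpy as np

def radial_sdf(pattern):
    """Radially averaged Fourier spectral density of a 2D pattern,
    a statistical descriptor for quasi-random structures."""
    F = np.fft.fftshift(np.fft.fft2(pattern - pattern.mean()))
    psd = np.abs(F) ** 2 / pattern.size
    ny, nx = pattern.shape
    ky, kx = np.indices((ny, nx))
    r = np.hypot(ky - ny // 2, kx - nx // 2).astype(int)  # radial wavenumber bin
    sdf = np.bincount(r.ravel(), weights=psd.ravel()) / np.bincount(r.ravel())
    return sdf                                            # SDF versus spatial frequency
```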
Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-02-17
Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective, exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio-video-recorded, complex high-fidelity simulation (SID or classical) covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.
Virtual reality cataract surgery training: learning curves and concurrent validity.
Selvander, Madeleine; Åsman, Peter
2012-08-01
To investigate initial learning curves on a virtual reality (VR) eye surgery simulator and whether achieved skills are transferable between tasks. Thirty-five medical students were randomized to complete ten iterations on either the VR Capsulorhexis module (group A) or the Cataract navigation training module (group B) and then two iterations on the other module. Learning curves were compared between groups. The second Capsulorhexis video was saved and evaluated with the performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS). The students' stereoacuity was examined. Both groups demonstrated significant improvements in performance over the 10 iterations: group A for all parameters analysed, including score (p < 0.0001), time (p < 0.0001) and corneal damage (p = 0.0003); group B for time (p < 0.0001) and corneal damage (p < 0.0001), but not for score (p = 0.752). Training on one module did not improve performance on the other. Capsulorhexis score correlated significantly with evaluation of the videos using the OSACSS performance rating tool. For stereoacuity <120 and ≥120 seconds of arc, the sum of both modules' second-iteration scores was 73.5 and 41.0, respectively (p = 0.062). An initial rapid improvement in performance on a simulator with repeated practice was shown. For capsulorhexis, 10 iterations with only simulator feedback are not enough to reach a plateau for overall score. Skills transfer between modules was not found, suggesting benefits from training on both modules. Stereoacuity may be of importance in the recruitment and training of new cataract surgeons. Additional studies are needed to investigate this further. Concurrent validity was found for the Capsulorhexis module. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.
Cooperation for volunteering and partially random partnerships
NASA Astrophysics Data System (ADS)
Szabó, György; Vukov, Jeromos
2004-03-01
Competition among cooperative, defective, and loner strategies is studied by considering an evolutionary prisoner's dilemma game for different partnerships. In this game each player can adopt the strategy of one of its coplayers with a probability depending on the difference of the payoffs coming from games with the corresponding coplayers. Our attention is focused on the effects of annealed and quenched randomness in the partnership for a fixed number of coplayers. It is shown that only the loners survive if the four coplayers are chosen randomly (mean-field limit). On the contrary, on the square lattice all three strategies are maintained by cyclic invasions resulting in a self-organizing spatial pattern. If the fixed partnership is described by a regular small-world structure, then a homogeneous oscillation occurs in the population dynamics when the measure of quenched randomness exceeds a threshold value. Similar behavior, with higher sensitivity to the randomness, is found if temporary partners are substituted for the standard ones with some probability at each step of the iteration.
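The payoff-difference adoption probability described here is commonly written as a Fermi rule; a one-line sketch under that assumption, with the parameter K playing the role of noise:

```python
import numpy as np

def adopt_probability(payoff_own, payoff_neighbor, K=0.1):
    """Fermi-rule strategy-adoption probability of the type used in such
    evolutionary games: better-earning neighbors are imitated more often."""
    return 1.0 / (1.0 + np.exp((payoff_own - payoff_neighbor) / K))
```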
Random unitary evolution model of quantum Darwinism with pure decoherence
NASA Astrophysics Data System (ADS)
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow to store information about an open system S of interest with maximal efficiency.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
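The MATLAB package itself is the reference; purely as a hedged outline of the underlying scheme (oversampling plus power iterations), a NumPy version might look like:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Basic randomized truncated SVD, in the spirit of (but much simpler
    than) Algorithm 971: sketch, orthonormalize, then solve a small SVD."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k + oversample)))
    for _ in range(power_iters):         # power iterations help slowly decaying spectra
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]
```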
Video encryption using chaotic masks in joint transform correlator
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2015-03-01
A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
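A sketch of how a chaotic map can seed a random phase mask, assuming the logistic map as the chaotic map (the paper leaves the choice general); the seed value, burn-in count and parameter r below are illustrative, and varying either the seed or the iteration level yields a different mask:

```python
import numpy as np

def chaotic_phase_mask(shape, seed_value=0.3731, burn_in=1000, r=3.9999):
    """Random phase mask driven by the logistic map x -> r x (1 - x)."""
    n = shape[0] * shape[1]
    x = seed_value
    for _ in range(burn_in):                 # discard the transient iterations
        x = r * x * (1.0 - x)
    vals = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        vals[i] = x
    return np.exp(2j * np.pi * vals.reshape(shape))   # unit-modulus phase mask
```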
Telecommunications media for the delivery of educational programming
NASA Technical Reports Server (NTRS)
Ballard, R.; Eastwood, L. F., Jr.
1974-01-01
The technical characteristics of various telecommunications media are examined for incorporation into educational networks. FM radio, AM radio, and VHF and UHF television are considered along with computer-aided instruction. The application of iteration networks to library systems is discussed, along with microform technology. The basic principles of communications theory are outlined, and the operation of the PLATO 4 random access system is described.
ERIC Educational Resources Information Center
Shernoff, Elisa S.; Lekwa, Adam J.; Reddy, Linda A.; Coccaro, Candace
2017-01-01
The purpose of this qualitative study was to examine teachers' attitudes and experiences with coaching. This study was conducted in advance of a planned randomized controlled trial of a coaching intervention to better align the model with teachers' needs and goals. Thirty-four K-5 general (n = 26), special education (n = 6), and educational…
Robust High Data Rate MIMO Underwater Acoustic Communications
2011-09-30
We solved it by exploiting FFTs. The extended CAN algorithm is referred to as periodic CAN (PeCAN). Unlike most existing sequence construction methods, which are algebraic and deterministic in nature, we start the iteration of PeCAN from random phase initializations and then proceed to ... covert UAC applications. We will use PeCAN sequences for more in-water experimentation to demonstrate their effectiveness. Temporal Resampling: In
Operational Evaluation of Self-Paced Instruction in U.S. Army Training.
1979-01-01
one iteration of each course, and the ongoing refinement and adjustment of managerial techniques. Research Approach: A quasi-experimental approach was ... research design employed experimental and control groups, posttest only, with non-random groups. The design dealt with the six major areas identified as ... course on Interpersonal Communications were conducted in the conventional, group-paced manner. Experimental course materials: Wherever possible, existing
Rotating and binary relativistic stars with magnetic field
NASA Astrophysics Data System (ADS)
Markakis, Charalampos
We develop a geometrical treatment of general relativistic magnetohydrodynamics for perfectly conducting fluids in Einstein-Maxwell-Euler spacetimes. The theory is applied to describe a neutron star that is rotating or is orbiting a black hole or another neutron star. Under the hypotheses of stationarity and axisymmetry, we obtain the equations governing magnetohydrodynamic equilibria of rotating neutron stars with poloidal, toroidal or mixed magnetic fields. Under the hypothesis of an approximate helical symmetry, we obtain the first law of thermodynamics governing magnetized equilibria of double neutron star or black hole-neutron star systems in close circular orbits. The first law is written as a relation between the change in the asymptotic Noether charge δQ and the changes in the area and electric charge of black holes, and in the vorticity, baryon rest mass, entropy, charge and magnetic flux of the magnetofluid. In an attempt to provide a better theoretical understanding of the methods used to construct models of isolated rotating stars and corotating or irrotational binaries and their unexplained convergence properties, we analytically examine the behavior of different iterative schemes near a static solution. We find the spectrum of the linearized iteration operator and show for self-consistent field methods that iterative instability corresponds to unstable modes of this operator. On the other hand, we show that the success of iteratively stable methods is due to (quasi-)nilpotency of this operator. Finally, we examine the integrability of motion of test particles in a stationary axisymmetric gravitational field. We use a direct approach to seek nontrivial constants of motion polynomial in the momenta, in addition to energy and angular momentum about the symmetry axis. We establish the existence and uniqueness of quadratic constants and the nonexistence of quartic constants for stationary axisymmetric Newtonian potentials with equatorial symmetry, and elucidate their relativistic analogues.
Extending the physics basis of quiescent H-mode toward ITER relevant parameters
Solomon, W. M.; Burrell, K. H.; Fenstermacher, M. E.; ...
2015-06-26
Recent experiments on DIII-D have addressed several long-standing issues needed to establish quiescent H-mode (QH-mode) as a viable operating scenario for ITER. In the past, QH-mode was associated with low density operation, but has now been extended to high normalized densities compatible with operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both absolute (n̄e ≈ 7 × 10^19 m^-3) and normalized Greenwald fraction (n̄e/nG > 0.7). In these plasmas, the pedestal can evolve to very high pressure and edge current as the density is increased. High density QH-mode operation with strong shaping has allowed access to a previously predicted regime of very high pedestal dubbed “Super H-mode”. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed density evolution. The confirmation of the shape dependence of the maximum density threshold for QH-mode helps validate the underlying theoretical model of peeling-ballooning modes for ELM stability. In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E×B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for the ITER Q=10 mission.
Magnetohydrodynamic stability at a separatrix. I. Toroidal peeling modes and the energy principle
NASA Astrophysics Data System (ADS)
Webster, A. J.; Gimblett, C. G.
2009-08-01
A potentially serious impediment to the production of energy by nuclear fusion in large tokamaks, such as ITER [R. Aymar, V. A. Chuyanov, M. Huguet, Y. Shimomura, ITER Joint Central Team, and ITER Home Teams, Nucl. Fusion 41, 1301 (2001)] and DEMO [D. Maisonner, I. Cook, S. Pierre, B. Lorenzo, D. Luigi, G. Luciano, N. Prachai, and P. Aldo, Fusion Eng. Des. 81, 1123 (2006)], is the potential for rapid deposition of energy onto plasma facing components by edge localized modes (ELMs). The trigger for ELMs is believed to be the ideal magnetohydrodynamic peeling-ballooning instability, but recent numerical calculations have suggested that a plasma equilibrium with an X-point, as is found in all ITER-like tokamaks, is stable to the peeling mode. This contrasts with analytical calculations [G. Laval, R. Pellat, and J. S. Soule, Phys. Fluids 17, 835 (1974)] that found the peeling mode to be unstable in cylindrical plasmas with arbitrary cross-sectional shape. Here, we re-examine the assumptions made in cylindrical geometry calculations and generalize the calculation to an arbitrary tokamak geometry at marginal stability. The resulting equations solely describe the peeling mode and are not complicated by coupling to the ballooning mode, for example. We find that stability is determined by the value of a single parameter Δ' that is the poloidal average of the normalized jump in the radial derivative of the perturbed magnetic field's normal component. We also find that near a separatrix it is possible for the energy principle's δW to be negative (which is usually taken to indicate that the mode is unstable, as in the cylindrical theory), while the growth rate remains arbitrarily small.
Nonlinear Fatigue Damage Model Based on the Residual Strength Degradation Law
NASA Astrophysics Data System (ADS)
Yongyi, Gao; Zhixiao, Su
In this paper, a logarithmic expression describing the residual strength degradation process is developed in order to fit fatigue test results for normalized carbon steel. The definition and expression of fatigue damage due to symmetrical stress with a constant amplitude are also given. The expression of fatigue damage can also explain the nonlinear properties of fatigue damage. Furthermore, the fatigue damage of structures under random stress is analyzed, and an iterative formula describing the fatigue damage process is deduced. Finally, an approximate method for evaluating the fatigue life of structures under repeated random stress blocks is presented and illustrated through various calculation examples.
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-03-01
For the many-body-localized phase of random Majorana models, a general strong disorder real-space renormalization procedure known as RSRG-X (Pekker et al 2014 Phys. Rev. X 4 011052) is described to produce the whole set of excited states, via the iterative construction of the local integrals of motion (LIOMs). The RG rules are then explicitly derived for arbitrary quadratic Hamiltonians (free-fermions models) and for the Kitaev chain with local interactions involving even numbers of consecutive Majorana fermions. The emphasis is put on the advantages of the Majorana language over the usual quantum spin language to formulate unified RSRG-X rules.
Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.
Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide
2011-01-01
Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population in those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on the library's amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.
Robust local search for spacecraft operations using adaptive noise
NASA Technical Reports Server (NTRS)
Fukunaga, Alex S.; Rabideau, Gregg; Chien, Steve
2004-01-01
Randomization is a standard technique for improving the performance of local search algorithms for constraint satisfaction. However, it is well known that local search algorithms are sensitive to the noise values selected. We investigate the use of an adaptive noise mechanism in an iterative repair-based planner/scheduler for spacecraft operations. Preliminary results indicate that adaptive noise makes the use of randomized repair moves safe and robust; that is, it makes it possible to consistently achieve performance comparable with the best tuned noise setting, without the need to manually tune the noise parameter.
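A rule of the kind used for adaptive noise in randomized local search might look like the sketch below; this is a generic mechanism in the spirit of adaptive-noise WalkSAT variants, not the planner's actual code, and the thresholds are illustrative:

```python
def adapt_noise(noise, improved, steps_since_best, stagnation_limit=100):
    """Generic adaptive-noise rule: raise the noise probability when the
    search stagnates, cut it back after an improvement is found."""
    if improved:
        return max(0.0, noise - noise / 10.0)         # cool down after progress
    if steps_since_best > stagnation_limit:
        return min(1.0, noise + (1.0 - noise) / 5.0)  # heat up when stuck
    return noise
```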
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan; Tseng, Tseung-Yuen
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO2 device with a double layer ZnO/ZrO2 one, and obtains results which are in good agreement with experimental data.
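A toy version of the comparative idea, with hypothetical barrier values (the real method evaluates Gibbs free energy changes of a simulated metal-insulator-metal structure, not a bare list of numbers); fewer accepted degradation moves suggests a more robust stack:

```python
import numpy as np

def metropolis_flip_count(barriers_eV, kT_eV=0.025, trials=100_000, seed=0):
    """Count Metropolis-accepted degradation moves for a stack described
    (here, crudely) by a list of Gibbs-energy barriers in eV."""
    rng = np.random.default_rng(seed)
    accepted = 0
    for _ in range(trials):
        dG = rng.choice(barriers_eV)             # barrier of a randomly picked move
        if rng.random() < np.exp(-dG / kT_eV):   # Metropolis acceptance rule
            accepted += 1
    return accepted

# e.g. compare a hypothetical single-layer stack with a double-layer one:
# metropolis_flip_count([0.6]) versus metropolis_flip_count([0.6, 0.9])
```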
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome the unresolved issues in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by exploiting the statistical independence of the signal mixtures, and it has been successfully applied in many fields, such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used to identify vibratory source signals in complex structures. In this study, a simple iterative extension of the conventional ICA is proposed to mitigate these problems. To extract more stable source signals with a valid ordering, the proposed method iterates and reorders the extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems in complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
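A compact sketch of the reordering idea, using scikit-learn's FastICA as the base separator (the paper iterates its own conventional-ICA implementation; the correlation-based matching below is the part specific to the proposed scheme, and all names are mine):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_with_reordering(mixtures, references, seed=0):
    """Separate, then reorder components by their correlation with signals
    measured on or near the sources, resolving ICA's permutation ambiguity."""
    sources = FastICA(random_state=seed).fit_transform(mixtures)  # (samples, comps)
    order, used = [], set()
    for ref in references.T:              # one reference signal per known source
        corr = [abs(np.corrcoef(ref, s)[0, 1]) for s in sources.T]
        masked = [c if i not in used else -1.0 for i, c in enumerate(corr)]
        best = int(np.argmax(masked))
        order.append(best)
        used.add(best)
    return sources[:, order]              # components in the reference order
```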
Invariants, Attractors and Bifurcation in Two Dimensional Maps with Polynomial Interaction
NASA Astrophysics Data System (ADS)
Hacinliyan, Avadis Simon; Aybar, Orhan Ozgur; Aybar, Ilknur Kusbeyzi
This work presents an extended discrete-time analysis of maps and their generalizations, including higher iterates, in order to better understand the resulting enrichment of the bifurcation properties. The standard concepts of stability analysis and bifurcation theory for maps are used. Both iterated maps and flows are used as models for chaotic behavior. It is well known that when flows are converted to maps by discretization, the equilibrium points remain the same but a richer bifurcation scheme is observed. For example, the logistic map has very simple behavior as a differential equation, but as a map it exhibits fold and period-doubling bifurcations. A way to gain information about the global structure of the state space of a dynamical system is to investigate invariant manifolds of saddle equilibrium points. Studying the intersections of the stable and unstable manifolds is essential for understanding the structure of a dynamical system. It has been known that the Lotka-Volterra map, and systems that can be reduced to it or its generalizations, in special cases involving local and polynomial interactions admit invariant manifolds. Bifurcation analysis of this map and its higher iterates can be done to understand the global structure of the system and the artifacts of the discretization by comparing with the corresponding results from the differential equation on which they are based.
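The logistic-map behavior the text appeals to is easy to reproduce; this sketch collects the post-transient orbit for each parameter value, which is exactly the data behind a period-doubling bifurcation diagram:

```python
import numpy as np

def logistic_bifurcation(r_values, burn_in=500, keep=100, x0=0.5):
    """Iterate x -> r x (1 - x) and keep the post-transient orbit for each r:
    1, 2, 4, ... distinct points appear as r crosses the doubling cascade."""
    out = {}
    for r in r_values:
        x = x0
        for _ in range(burn_in):          # discard the transient
            x = r * x * (1 - x)
        orbit = []
        for _ in range(keep):             # record the attractor
            x = r * x * (1 - x)
            orbit.append(x)
        out[r] = orbit
    return out
```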
Determination and Control of Optical and X-Ray Wave Fronts
NASA Technical Reports Server (NTRS)
Kim, Young K.
1997-01-01
A successful design of a space-based or ground optical system requires an iterative procedure which includes the kinematics and dynamics of the system in operating environment, control synthesis and verification. To facilitate the task of designing optical wave front control systems being developed at NASA/MSFC, a multi-discipline dynamics and control tool has been developed by utilizing TREETOPS, a multi-body dynamics and control simulation, NASTRAN and MATLAB. Dynamics and control models of STABLE and ARIS were developed for TREETOPS simulation, and their simulation results are documented in this report.
New discretization and solution techniques for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.
1983-01-01
Several topics arising in the finite element solution of the incompressible Navier-Stokes equations are considered. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. The role of artificial viscosity in viscous flow calculations is studied, emphasizing work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time marching methods and classical iterative techniques, as well as some modifications are mentioned.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solve optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest norm pixels, and standard endmembers selection routines are studied and compared using simulated and real data.
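The effect of the initialization scheme can be demonstrated even with a stock nonnegative factorization standing in for cPMF; here scikit-learn's NMF is run from a random start and from an SVD-based (nndsvd) start, and the relative residuals compared (Y must be a nonnegative matrix, e.g. pixels by bands):

```python
import numpy as np
from sklearn.decomposition import NMF

def compare_initializations(Y, rank, seed=0):
    """Compare random vs. data-based initialization for a nonnegative
    factorization (a stand-in for the cPMF solver of the paper)."""
    fits = {}
    for init in ("random", "nndsvd"):     # nndsvd is an SVD-based start
        model = NMF(n_components=rank, init=init, max_iter=500, random_state=seed)
        W = model.fit_transform(Y)        # abundance-like factor
        H = model.components_             # endmember-like factor
        fits[init] = np.linalg.norm(Y - W @ H) / np.linalg.norm(Y)
    return fits                           # relative residual per scheme
```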
Zhang, Jing; Song, Yuan-Lin; Bai, Chun-Xue
2013-01-01
Chronic obstructive pulmonary disease (COPD) is a common disease that leads to huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management.
Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.
2009-01-01
Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334
Vaisman, Nachum; Shaltiel, Galit; Daniely, Michal; Meiron, Oren E; Shechter, Assaf; Abrams, Steven A; Niv, Eva; Shapira, Yami; Sagi, Amir
2014-10-01
Calcium supplementation is a widely recognized strategy for achieving adequate calcium intake. We designed this blinded, randomized, crossover interventional trial to compare the bioavailability of a new stable synthetic amorphous calcium carbonate (ACC) with that of crystalline calcium carbonate (CCC) using the dual stable isotope technique. The study was conducted in the Unit of Clinical Nutrition, Tel Aviv Sourasky Medical Center, Israel. The study population included 15 early postmenopausal women aged 54.9 ± 2.8 (mean ± SD) years with no history of major medical illness or metabolic bone disorder, excess calcium intake, or vitamin D deficiency. A standardized breakfast was followed by randomly provided CCC or ACC capsules containing 192 mg elemental calcium labeled with 44Ca, at intervals of at least 3 weeks. After the capsules were swallowed, intravenous CaCl2 labeled with 42Ca was administered on each occasion. Fractional calcium absorption (FCA) of ACC and CCC was calculated from the 24-hour urine collection following calcium administration. The results indicated that the FCA of ACC was on average double (± 0.96 SD) that of CCC (p < 0.02). The higher absorption of the synthetic stable ACC may make it a more efficacious form of calcium supplementation. © 2014 American Society for Bone and Mineral Research.
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
Xu, Songhua; Krauthammer, Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of 0.60, and that it performs better than comparable approaches. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
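One pass of the projection-histogram idea is sketched below; the paper's algorithm applies such splits iteratively, alternating directions on each sub-region and with its own pivoting rules, and the threshold and names here are assumptions:

```python
import numpy as np

def projection_text_rows(binary_img, min_ink=3):
    """Single projection-histogram pass: rows whose ink count exceeds a
    threshold form candidate text bands (start, end) in row coordinates."""
    rows = (binary_img > 0).sum(axis=1)          # horizontal projection histogram
    bands, start = [], None
    for i, v in enumerate(rows):
        if v >= min_ink and start is None:
            start = i                            # entering a text band
        elif v < min_ink and start is not None:
            bands.append((start, i))             # leaving a text band
            start = None
    if start is not None:
        bands.append((start, len(rows)))
    return bands
```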
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Wang, Youqing; Dassau, Eyal; Doyle, Francis J
2010-02-01
A novel combination of iterative learning control (ILC) and model predictive control (MPC), referred to here as model predictive iterative learning control (MPILC), is proposed for glycemic control in type 1 diabetes mellitus. MPILC exploits two key factors: frequent glucose readings made possible by continuous glucose monitoring technology; and the repetitive nature of glucose-meal-insulin dynamics with a 24-h cycle. The proposed algorithm can learn from an individual's lifestyle, allowing the control performance to be improved from day to day. After less than 10 days, the blood glucose concentrations can be kept within a range of 90-170 mg/dL. Generally, control performance under MPILC is better than that under MPC. The proposed methodology is robust to random variations in meal timings within +/-60 min or meal amounts within +/-75% of the nominal value, which validates MPILC's superior robustness compared to run-to-run control. Moreover, to further improve the algorithm's robustness, an automatic scheme for setpoint update that ensures safe convergence is proposed. Furthermore, the proposed method does not require user intervention; hence, the algorithm should be of particular interest for glycemic control in children and adolescents.
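The learning half of MPILC can be caricatured by the classic ILC update, sketched here with an assumed scalar learning gain; the actual algorithm embeds this correction inside an MPC law with safety constraints and a setpoint-update scheme:

```python
import numpy as np

def ilc_update(insulin_profile, glucose_error, gain=0.05):
    """Generic iterative-learning update u_{k+1} = u_k + L e_k: today's 24 h
    insulin profile is corrected by yesterday's glucose error trace sampled
    at the same times of day (gain L is an illustrative scalar)."""
    return insulin_profile + gain * np.asarray(glucose_error)
```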
NASA Astrophysics Data System (ADS)
Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter
2018-03-01
The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
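A hedged sketch of the four-step OSR recipe for a linear Hermitian problem, using a gradient-flow fixed point with a Rayleigh-quotient renormalization factor and Gram-Schmidt deflation; the step size dt and iteration counts are illustrative and dt must be small relative to the spectral range of H:

```python
import numpy as np

def osr_modes(H, n_modes, dt=0.01, iters=5000, seed=0):
    """Compute the lowest eigenpairs of a symmetric matrix H by a
    renormalized fixed-point iteration with Gram-Schmidt deflation."""
    rng = np.random.default_rng(seed)
    modes, energies = [], []
    for _ in range(n_modes):
        x = rng.standard_normal(H.shape[0])    # (i) generous initial guess
        for _ in range(iters):
            for m in modes:                    # (iii) orthogonalize against found modes
                x -= (m @ x) * m
            lam = x @ H @ x / (x @ x)          # (ii) renormalization factor
            x = x - dt * (H @ x - lam * x)     # (iv) fixed-point (gradient-flow) step
            x /= np.linalg.norm(x)
        modes.append(x)
        energies.append(x @ H @ x)
    return np.array(energies), np.array(modes)
```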
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy, with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big-data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
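For an ℓ1 prior and a real linear measurement operator, the forward-backward building block referred to here reduces to the familiar ISTA iteration; a minimal sketch follows (the paper's algorithms add parallel data blocks, multiple prior spaces, and randomization on top of this):

```python
import numpy as np

def forward_backward_l1(Phi, y, lam, iters=200):
    """Forward-backward (ISTA-type) iteration for min 0.5||Phi x - y||^2
    + lam ||x||_1: a gradient step on the data term, then the
    soft-thresholding proximal step on the prior."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2              # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x - step * Phi.T @ (Phi @ x - y)              # forward (gradient) step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # backward (prox) step
    return x
```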
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high resolution imaging of hydrated biological specimens, exploiting the availability of the water window region. In particular, projection-type microscopy has the advantages of a wide viewing area, an easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to the Fresnel diffraction of X-rays, which ultimately reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of the image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects in each situation where the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which effective iterative correction of the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen images.
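The repeated Fresnel and inverse Fresnel transformations can be implemented with an FFT-based angular-spectrum propagator; propagating by -z undoes the free-space blur of propagating by +z. The sketch below assumes a square field sampled at pitch dx and is illustrative, not the authors' code:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Angular-spectrum propagation of a complex field over distance dz;
    calling it again with -dz is the inverse transform used in the
    iterative blur-correction procedure."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```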
Randomized shortest-path problems: two related models.
Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh
2009-08-01
This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost; nothing particular up to now, as you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid purely deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy, minimizing the expected routing cost, are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism, allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
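The flavor of such Bellman-like recursions can be illustrated with a soft-minimum value iteration, which interpolates between deterministic shortest paths (T near 0) and highly random routing (large T); the small graph, costs and temperature below are made up, and this is a generic soft-min variant rather than the letter's exact necessary conditions.

    import numpy as np

    # Soft-minimum value iteration on a tiny directed graph; node 3 is the
    # destination. T controls the entropy/randomness of the resulting policy.
    INF = np.inf
    C = np.array([[INF, 1.0, 4.0, INF],
                  [INF, INF, 2.0, 6.0],
                  [INF, INF, INF, 3.0],
                  [INF, INF, INF, INF]])   # edge costs (INF = no edge)
    T = 0.5
    n = C.shape[0]
    v = np.zeros(n)
    for _ in range(100):                   # value iteration with a soft minimum
        v_new = v.copy()
        for i in range(n - 1):
            w = np.exp(-(C[i] + v) / T)    # Boltzmann weights over successors
            v_new[i] = -T * np.log(w.sum())
        if np.allclose(v_new, v, atol=1e-12):
            break
        v = v_new
    # randomized policy: transition probabilities out of each node
    for i in range(n - 1):
        p = np.exp(-(C[i] + v) / T); p /= p.sum()
        print("node", i, "->", np.round(p, 3))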
Pedestal evolution physics in low triangularity JET tokamak discharges with ITER-like wall
NASA Astrophysics Data System (ADS)
Bowman, C.; Dickinson, D.; Horvath, L.; Lunniss, A. E.; Wilson, H. R.; Cziegler, I.; Frassinetti, L.; Gibson, K.; Kirk, A.; Lipschultz, B.; Maggi, C. F.; Roach, C. M.; Saarelma, S.; Snyder, P. B.; Thornton, A.; Wynn, A.; Contributors, JET
2018-01-01
The pressure gradient of the high confinement pedestal region at the edge of tokamak plasmas rapidly collapses during plasma eruptions called edge localised modes (ELMs), and then re-builds over a longer time scale before the next ELM. The physics that controls the evolution of the JET pedestal between ELMs is analysed for 1.4 MA, 1.7 T, low triangularity, δ = 0.2, discharges with the ITER-like wall, finding that the pressure gradient typically tracks the ideal magneto-hydrodynamic ballooning limit, consistent with a role for the kinetic ballooning mode. Furthermore, the pedestal width is often influenced by the region of plasma that has second stability access to the ballooning mode, which can explain its sometimes complex evolution between ELMs. A local gyrokinetic analysis of a second stable flux surface reveals stability to kinetic ballooning modes; global effects are expected to provide a destabilising mechanism and need to be retained in such second stable situations. As well as an electron-scale electron temperature gradient mode, ion scale instabilities associated with this flux surface include an electro-magnetic trapped electron branch and two electrostatic branches propagating in the ion direction, one with high radial wavenumber. In these second stability situations, the ELM is triggered by a peeling-ballooning mode; otherwise the pedestal is somewhat below the peeling-ballooning mode marginal stability boundary at ELM onset. In this latter situation, there is evidence that higher frequency ELMs are paced by an oscillation in the plasma, causing a crash in the pedestal before the peeling-ballooning boundary is reached. A model is proposed in which the oscillation is associated with hot plasma filaments that are pushed out towards the plasma edge by a ballooning mode, draining their free energy into the cooler plasma there, and then relaxing back to repeat the process. The results suggest that avoiding the oscillation and maximising the region of plasma that has second stability access will lead to the highest pedestal heights and, therefore, best confinement—a key result for optimising the fusion performance of JET and future tokamaks, such as ITER.
2011-01-01
Background: Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version, in which the content and format are refined through cognitive interviews. Methods: We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on these data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results: The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience, and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions: We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409
Wave propagation, scattering and emission in complex media
NASA Astrophysics Data System (ADS)
Jin, Ya-Qiu
I. Polarimetric scattering and SAR imagery. EM wave propagation and scattering in polarimetric SAR interferometry / S. R. Cloude. Terrain topographic inversion from single-pass polarimetric SAR image data by using polarimetric stokes parameters and morphological algorithm / Y. Q. Jin, L. Luo. Road detection in forested area using polarimetric SAR / G. W. Dong ... [et al.]. Research on some problems about SAR radiometric resolution / G. Dong ... [et al.]. A fast image matching algorithm for remote sensing applications / Z. Q. Hou ... [et al.]. A new algorithm of noised remote sensing image fusion based on steerable filters / X. Kang ... [et al.]. Adaptive noise reduction of InSAR data based on anisotropic diffusion models and their applications to phase unwrapping / C. Wang, X. Gao, H. Zhang -- II. Scattering from randomly rough surfaces. Modeling tools for backscattering from rough surfaces / A. K. Fung, K. S. Chen. Pseudo-nondiffracting beams from rough surface scattering / E. R. Méndez, T. A. Leskova, A. A. Maradudin. Surface roughness clutter effects in GPR modeling and detection / C. Rappaport. Scattering from rough surfaces with small slopes / M. Saillard, G. Soriano. Polarization and spectral characteristics of radar signals reflected by sea-surface / V. A. Butko, V. A. Khlusov, L. I. Sharygina. Simulation of microwave scattering from wind-driven ocean surfaces / M. Y. Xia ... [et al.]. HF surface wave radar tests at the Eastern China Sea / X. B. Wu ... [et al.] -- III. Electromagnetics of complex materials. Wave propagation in plane-parallel metamaterial and constitutive relations / A. Ishimaru ... [et al.]. Two dimensional periodic approach for the study of left-handed metamaterials / T. M. Grzegorczyk ... [et al.]. Numerical analysis of the effective constitutive parameters of a random medium containing small chiral spheres / Y. Nanbu, T. Matsuoka, M. Tateiba. Wave propagation in inhomogeneous media: from the Helmholtz to the Ginzburg -Landau equation / M. Gitterman. Transformation of the spectrum of scattered radiation in randomly inhomogeneous absorptive plasma layer / G. V. Jandieri, G. D. Aburjunia, V. G. Jandieri. Numerical analysis of microwave heating on saponification reaction / K. Huang, K. Jia -- IV. Scattering from complex targets. Analysis of electromagnetic scattering from layered crossed-gratings of circular cylinders using lattice sums technique / K. Yasumoto, H. T. Jia. Scattering by a body in a random medium / M. Tateiba, Z. Q. Meng, H. El-Ocla. A rigorous analysis of electromagnetic scattering from multilayered crossed-arrays of metallic cylinders / H. T. Jia, K. Yasumoto. Vector models of non-stable and spatially-distributed radar objects / A. Surkov ... [et al.]. Simulation of algorithm of orthogonal signals forming and processing used to estimate back scattering matrix of non-stable radar objects / D. Nosov ... [et al.]. New features of scattering from a dielectric film on a reflecting metal substrate / Z. H. Gu, I. M. Fuks, M. Ciftan. A higher order FDTD method for EM wave propagation in collision plasmas / S. B. Liu, J. J. Mo, N. C. Yuan -- V. Radiative transfer and remote sensing. Simulating microwave emission from Antarctica ice sheet with a coherent model / M. Tedesco, P. Pampaloni. Scattering and emission from inhomogeneous vegetation canopy and alien target by using three-dimensional Vector Radiative Transfer (3D-VRT) equation / Y. Q. Jin, Z. C. Liang. Analysis of land types using high-resolution satellite images and fractal approach / H. G. Zhang ... 
[et al.]. Data fusion of RADARSAT SAR and DMSP SSM/I for monitoring sea ice of China's Bohai Sea / Y. Q. Jin. Retrieving atmospheric temperature profiles from simulated microwave radiometer data with artificial neural networks / Z. G. Yao, H. B. Chen -- VI. Wave propagation and wireless communication. Wireless propagation in urban environments: modeling and experimental verification / D. Erricolo ... [et al.]. An overview of physics-based wave propagation in forested environment / K. Sarabandi, I. Koh. Angle-of-arrival fluctuations due to meteorological conditions in the diffraction zone of C-band radio waves, propagated over the ground surface / T. A. Tyufilina, A. A. Meschelyakov, M. V. Krutikov. Simulating radio channel statistics using ray based prediction codes / H. L. Bertoni. Measurement and simulation of ultra wideband antenna elements / W. Sörgel, W. Wiesbeck. The experimental investigation of a ground-placed radio complex synchronization system / V. P. Denisov ... [et al.] -- VII. Computational electromagnetics. Analysis of 3-D electromagnetic wave scattering with the Krylov subspace FFT iterative methods / R. S. Chen ... [et al.]. Sparse approximate inverse preconditioned iterative algorithm with block toeplitz matrix for fast analysis of microstrip circuits / L. Mo, R. S. Chen, E. K. N. Yung. An Efficient modified interpolation technique for the translation operators in MLFMA / J. Hu, Z. P. Nie, G. X. Zou. Efficient solution of 3-D vector electromagnetic scattering by CG-MLFMA with partly approximate iteration / J. Hu, Z. P. Nie. The effective constitution at interface of different media / L. G. Zheng, W. X. Zhang. Novel basis functions for quadratic hexahedral edge element / P. Liu ... [et al.]. A higher order FDTD method for EM wave propagation in collision plasmas / S. B. Liu, J. J. Mo, N. C. Yuan. Attenuation of electric field eradiated by underground source / J. P. Dong, Y. G. Gao.
Influence of ICRF heating on the stability of TAEs
NASA Astrophysics Data System (ADS)
Sears, J.; Burke, W.; Parker, R. R.; Snipes, J. A.; Wolfe, S.
2007-11-01
Unstable toroidicity-induced Alfvén eigenmodes (TAEs) can appear spontaneously due to resonant interaction with fast particles such as fusion alphas, raising concern that TAEs may threaten ITER performance. This work investigates the progression of stable TAE damping rates toward instability during a scan of ICRF heating power up to 3.1 MW. Stable eigenmodes are identified in Alcator C-Mod by the Active MHD diagnostic. Unstable TAEs are observed to appear spontaneously in C-Mod limited L-mode plasmas at sufficient tail energies generated by >3 MW of ICRF heating. However, preliminary analysis of experiments with moderate ICRF heating power shows that TAE stability may not simply degrade with overall fast particle content. There are hints that the stability of some TAEs may be enhanced in the presence of fast-particle distribution tails. Furthermore, the radial profile of the energetic particle distribution relative to the safety factor profile affects the influence of ICRF power on TAE stability.
ERIC Educational Resources Information Center
Campbell, Rebecca; Greeson, Megan R.; Bybee, Deborah; Raja, Sheela
2008-01-01
This study examined the co-occurrence of childhood sexual abuse, adult sexual assault, intimate partner violence, and sexual harassment in a predominantly African American sample of 268 female veterans, randomly sampled from an urban Veterans Affairs hospital women's clinic. A combination of hierarchical and iterative cluster analysis was used to…
Robust High Data Rate MIMO Underwater Acoustic Communications
2010-12-31
algorithm is referred to as periodic CAN (PeCAN). Unlike most existing sequence construction methods, which are algebraic and deterministic in nature, we ... start the iteration of PeCAN from random phase initializations and then proceed to cyclically minimize the desired metric. In this way, through ... by the foe and hence are especially useful as training sequences or as spreading sequences for UAC applications. We will use PeCAN sequences for
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to ... of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random ... sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and therefore, the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed.
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
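A minimal sketch of the sampling idea, assuming a simple 2D line-fitting task: candidate models live in a small "harmony memory", and new hypotheses are improvised from past good models (with small pitch adjustments) instead of being drawn purely at random as in RANSAC. The rates, sizes and consensus tolerance below are illustrative, not the authors' settings.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
    y[:60] = rng.uniform(0, 25, 60)                      # 30% outliers

    def inliers(params, tol=0.3):
        a, b = params
        return np.sum(np.abs(y - (a * x + b)) < tol)     # consensus score

    def random_candidate():
        i, j = rng.choice(len(x), 2, replace=False)      # two-point line hypothesis
        a = (y[j] - y[i]) / (x[j] - x[i] + 1e-12)
        return np.array([a, y[i] - a * x[i]])

    memory = [random_candidate() for _ in range(10)]     # harmony memory
    scores = [inliers(m) for m in memory]
    for _ in range(200):
        if rng.random() < 0.8:                           # improvise from memory (HMCR)
            new = memory[rng.integers(10)] + rng.normal(0, 0.05, 2)  # pitch adjust
        else:
            new = random_candidate()                     # RANSAC-style random sample
        s = inliers(new)
        worst = int(np.argmin(scores))
        if s > scores[worst]:                            # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = memory[int(np.argmax(scores))]
    print("estimated line:", np.round(best, 2))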
NASA Astrophysics Data System (ADS)
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for determining the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles, with a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying Young's modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).
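A toy version of the WLSM response-function step, with a stand-in quadratic "FEM response" replacing the ABAQUS homogenization runs; the sample points, weights and the 10% scatter on the matrix Young's modulus are assumptions for illustration.

    import numpy as np

    # A few deterministic runs at varying matrix Young's modulus E are fitted by
    # a weighted-least-squares polynomial, which then feeds Monte Carlo (or
    # Taylor-expansion) post-processing of the effective stiffness.
    E_samples = np.linspace(0.8, 1.2, 9)                   # trial values around the mean E
    C_eff = 2.0 + 1.5 * E_samples + 0.3 * E_samples ** 2   # stand-in FEM responses

    w = np.exp(-((E_samples - 1.0) / 0.2) ** 2)   # weight runs near the mean most
    coeffs = np.polyfit(E_samples, C_eff, deg=2, w=np.sqrt(w))
    poly = np.poly1d(coeffs)

    # Monte Carlo through the response function: E Gaussian with 10% scatter
    rng = np.random.default_rng(9)
    E_mc = rng.normal(1.0, 0.1, 100000)
    C_mc = poly(E_mc)
    print("mean, std of effective stiffness:", C_mc.mean(), C_mc.std())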
Garofalo, Andrea M.; Burrell, Keith H.; Eldon, David; ...
2015-05-26
For the first time, DIII-D experiments have achieved stationary quiescent H-mode (QH-mode) operation for many energy confinement times at simultaneous ITER-relevant values of beta, confinement, and safety factor, in an ITER similar shape. QH-mode provides excellent energy confinement, even at very low plasma rotation, while operating without edge localized modes (ELMs) and with strong impurity transport via the benign edge harmonic oscillation (EHO). By tailoring the plasma shape to improve the edge stability, the QH-mode operating space has also been extended to densities exceeding 80% of the Greenwald limit, overcoming the long-standing low-density limit of QH-mode operation. In the theory, the density range over which the plasma encounters the kink-peeling boundary widens as the plasma cross-section shaping is increased, thus increasing the QH-mode density threshold. Here, the DIII-D results are in excellent agreement with these predictions, and nonlinear MHD analysis of reconstructed QH-mode equilibria shows unstable low n kink-peeling modes growing to a saturated level, consistent with the theoretical picture of the EHO. Furthermore, high density operation in the QH-mode regime has opened a path to a new, previously predicted region of parameter space, named “Super H-mode” because it is characterized by very high pedestals that can be more than a factor of two above the peeling-ballooning stability limit for similar ELMing H-mode discharges at the same density.
Dynamic probability of reinforcement for cooperation: Random game termination in the centipede game.
Krockow, Eva M; Colman, Andrew M; Pulford, Briony D
2018-03-01
Experimental games have previously been used to study principles of human interaction. Many such games are characterized by iterated or repeated designs that model dynamic relationships, including reciprocal cooperation. To enable the study of infinite game repetitions and to avoid endgame effects of lower cooperation toward the final game round, investigators have introduced random termination rules. This study extends previous research that has focused narrowly on repeated Prisoner's Dilemma games by conducting a controlled experiment of two-player, random termination Centipede games involving probabilistic reinforcement and characterized by the longest decision sequences reported in the empirical literature to date (24 decision nodes). Specifically, we assessed mean exit points and cooperation rates, and compared the effects of four different termination rules: no random game termination, random game termination with constant termination probability, random game termination with increasing termination probability, and random game termination with decreasing termination probability. We found that although mean exit points were lower for games with shorter expected game lengths, the subjects' cooperativeness was significantly reduced only in the most extreme condition with decreasing computer termination probability and an expected game length of two decision nodes.
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429
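A minimal sketch contrasting the two scan orders on a bivariate Gaussian, whose conditional distributions are available in closed form; the correlation and chain length are illustrative, and this is not one of the paper's counterexample models.

    import numpy as np

    rng = np.random.default_rng(0)
    rho = 0.9
    sd = np.sqrt(1 - rho ** 2)            # conditional std of x1 given x2 (and vice versa)

    def gibbs(scan, steps=10000):
        x = np.zeros(2)
        out = np.empty((steps, 2))
        for t in range(steps):
            if scan == "systematic":
                order = (0, 1)              # fixed sweep over both variables
            else:
                order = (rng.integers(2),)  # random scan: one random coordinate per step
            for i in order:
                x[i] = rng.normal(rho * x[1 - i], sd)  # sample from the conditional
            out[t] = x
        return out

    for scan in ("systematic", "random"):
        s = gibbs(scan)
        print(scan, "sample correlation:", round(np.corrcoef(s.T)[0, 1], 3))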
NASA Astrophysics Data System (ADS)
Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki
2015-06-01
Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can suffer from sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as ℓ1-norm-based objective functions. However, the ℓ1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the ℓ2-norm and ℓ1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the ℓ2-norm is sensitive to noise, whereas the ℓ1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than ℓ1- and ℓ2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than ℓ1- and ℓ2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
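The robustness argument can be seen directly from the residual influence functions, i.e., the derivative of each objective with respect to a single data residual; the sketch below compares ℓ2, ℓ1 and a Student's t negative log-likelihood, with the degrees of freedom ν and scale s chosen arbitrarily.

    import numpy as np

    r = np.linspace(-10, 10, 9)
    nu, s = 3.0, 1.0
    infl_l2 = r                                   # grows linearly: noise-sensitive
    infl_l1 = np.sign(r)                          # bounded, but singular near r = 0
    infl_t = (nu + 1) * r / (nu * s**2 + r**2)    # bounded and smooth: damps outliers
    for a, b, c, d in zip(r, infl_l2, infl_l1, infl_t):
        print(f"r={a:6.2f}  l2={b:7.2f}  l1={c:5.1f}  t={d:6.2f}")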
Zhang, Zhe; Zhang, Fan; Wang, Yang; Du, Yi; Zhang, Huiyong; Kong, Dezhao; Liu, Yue; Yang, Guanlin
2014-10-30
Stable angina pectoris is experienced as trans-sternal or retro-sternal pressure or pain that may radiate to the left arm, neck or back. Although the available evidence on its effectiveness and mechanism is weak, traditional Chinese medicine is used as an alternative therapy for stable angina pectoris. We report a protocol of a randomized controlled trial using traditional Chinese medicine to investigate the effectiveness, mechanism and safety for patients with stable angina pectoris. This is a north-east Chinese, multi-center, multi-blinded, placebo-controlled superiority randomized trial. A total of 240 patients with stable angina pectoris will be randomly assigned to three groups: two treatment groups and a control group. The treatment groups will receive Chinese herbal medicine consisting of Yi-Qi-Jian-Pi and Qu-Tan-Hua-Zhuo granule and Yi-Qi-Jian-Pi and Qu-Tan-Hua-Yu granule, respectively, plus conventional medicine. The control group will receive placebo medicine in addition to conventional medicine. All three groups will undergo a 12-week treatment and 2-week follow-up. In total, four visits will be scheduled for each subject: one visit each in week 0, week 4, week 12 and week 14. The primary outcomes include: the frequency of angina pectoris attacks; the dosage of nitroglycerin; and the physical limitation dimension of the Seattle Angina Questionnaire (SAQ). The secondary outcomes include the remaining SAQ dimensions, a traditional Chinese medicine pattern questionnaire, and others. Therapeutic mechanism outcomes, safety outcomes and endpoint outcomes will also be assessed. The primary aim of this trial is to develop a standard protocol to generate high-quality evidence for assessing the effectiveness and safety of treating stable angina pectoris via TCM pattern differentiation, as well as exploring the efficacy mechanism and regulation with molecular biology and systems biology. ChiCTR-TRC-13003608, registered 18 June 2013.
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique for hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method embeds the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization treats the PSO algorithm and NM algorithm as one heuristic, rather than combining them in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and then at every step to improve the overall performance of the method. The performance of the proposed method was tested on 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
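A compact sketch of the hybridization pattern described above, assuming a simple sphere test function: a plain PSO loop in which the current global best is polished by a short Nelder-Mead run (via scipy) at every iteration. The coefficients and budgets are illustrative, not the article's tuned values.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    f = lambda z: np.sum(z ** 2)                 # stand-in test function (sphere)
    dim, n_part = 5, 20
    pos = rng.uniform(-5, 5, (n_part, dim))
    vel = np.zeros((n_part, dim))
    pbest = pos.copy(); pbest_f = np.apply_along_axis(f, 1, pos)
    g = pbest[np.argmin(pbest_f)].copy()

    for _ in range(50):
        r1, r2 = rng.random((2, n_part, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos += vel
        fv = np.apply_along_axis(f, 1, pos)
        improved = fv < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], fv[improved]
        # NM step inside the PSO loop: polish the current global best
        res = minimize(f, pbest[np.argmin(pbest_f)], method="Nelder-Mead",
                       options={"maxiter": 20, "xatol": 1e-9, "fatol": 1e-9})
        if res.fun < pbest_f.min():
            g = res.x
        else:
            g = pbest[np.argmin(pbest_f)].copy()
    print("best value found:", f(g))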
Evolutionary engineering for industrial microbiology.
Vanee, Niti; Fisher, Adam B; Fong, Stephen S
2012-01-01
Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, the desired approach utilizes rational, prospective design to address targeted problems. When considering the details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would constitute an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
Kinetics of carbide formation in the molybdenum-tungsten coatings used in the ITER-like Wall
NASA Astrophysics Data System (ADS)
Maier, H.; Rasinski, M.; von Toussaint, U.; Greuner, H.; Böswirth, B.; Balden, M.; Elgeti, S.; Ruset, C.; Matthews, G. F.
2016-02-01
The kinetics of tungsten carbide formation was investigated for tungsten coatings on carbon fibre composite with a molybdenum interlayer as they are used in the ITER-like Wall in JET. The coatings were produced by combined magnetron sputtering and ion implantation. The investigation was performed by preparing focused ion beam cross sections from samples after heat treatment in argon atmosphere. Baking of the samples was done at temperatures of 1100 °C, 1200 °C, and 1350 °C for hold times between 30 min and 20 h. It was found that the data can be well described by a diffusional random walk with a thermally activated diffusion process. The activation energy was determined to be (3.34 ± 0.11) eV. Predictions for the isothermal lifetime of this coating system were computed from this information.
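To make the practical use of such a fit concrete: with a thermally activated diffusion coefficient D = D0 exp(-Ea/kBT) and diffusional growth d ≈ sqrt(Dt), the isothermal lifetime to reach a given carbide thickness scales as t ≈ d²/D. The prefactor D0 and the allowable thickness below are invented for illustration; only the activation energy comes from the abstract.

    import numpy as np

    kB = 8.617e-5            # Boltzmann constant, eV/K
    Ea = 3.34                # eV, activation energy reported above
    D0 = 1.0e-6              # m^2/s, ILLUSTRATIVE prefactor (not from the paper)
    d = 1.0e-6               # allowable reaction-layer thickness: 1 micron (assumed)
    for T_C in (1100, 1200, 1350):
        T = T_C + 273.15
        D = D0 * np.exp(-Ea / (kB * T))      # thermally activated diffusivity
        t_life = d ** 2 / D                  # time to grow to thickness d, up to O(1)
        print(f"T = {T_C:4d} C  ->  D = {D:.2e} m^2/s,  lifetime ~ {t_life/3600:.1f} h")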
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5-10 seconds using MATLAB) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
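A simplified stand-in for the location-constraint idea (not the LCAMP algorithm itself): gradient-type updates whose thresholding stage is replaced by a projection onto a known support. The random operator, sizes and the oracle support are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 60, 200
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in sampling operator
    x_true = np.zeros(n); support = rng.choice(n, 8, replace=False)
    x_true[support] = rng.standard_normal(8)
    y = A @ x_true

    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    mask = np.zeros(n, bool); mask[support] = True   # known nonzero locations
    for _ in range(200):
        x = x + step * (A.T @ (y - A @ x))   # gradient (message-passing-like) update
        x[~mask] = 0.0                       # location constraint replaces thresholding
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))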
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
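A minimal sketch of an IFS-driven mutation, assuming the Sierpinski triangle as the generating IFS: a short chaos-game run produces a fractal-distributed 2D offset that replaces the usual Gaussian or Cauchy perturbation. The scaling and coordinate choice are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    VERTS = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # Sierpinski IFS vertices

    def ifs_point(n_iter=20):
        p = rng.random(2)
        for _ in range(n_iter):                 # chaos game: contract toward a vertex
            p = 0.5 * (p + VERTS[rng.integers(3)])
        return p - VERTS.mean(axis=0)           # center the offset around zero

    def mutate(parent, scale=0.5):
        child = parent.copy()
        i, j = rng.choice(parent.size, 2, replace=False)
        dx = scale * ifs_point()                # fractal-distributed step, not Gaussian
        child[i] += dx[0]; child[j] += dx[1]
        return child

    parent = np.zeros(10)
    print(np.round(mutate(parent), 3))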
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froio, A.; Bonifetto, R.; Carli, S.
In superconducting tokamaks, the cryoplant provides the helium needed to cool different clients, among which by far the most important one is the superconducting magnet system. The evaluation of the transient heat load from the magnets to the cryoplant is fundamental for the design of the latter and the assessment of suitable strategies to smooth the heat load pulses, induced by the intrinsically pulsed plasma scenarios characteristic of today's tokamaks, is crucial for both suitable sizing and stable operation of the cryoplant. For that evaluation, accurate but expensive system-level models, as implemented in e.g. the validated state-of-the-art 4C code, were developed in the past, including both the magnets and the respective external cryogenic cooling circuits. Here we show how these models can be successfully substituted with cheaper ones, where the magnets are described by suitably trained Artificial Neural Networks (ANNs) for the evaluation of the heat load to the cryoplant. First, two simplified thermal-hydraulic models for an ITER Toroidal Field (TF) magnet and for the ITER Central Solenoid (CS) are developed, based on ANNs, and a detailed analysis of the chosen networks' topology and parameters is presented and discussed. The ANNs are then inserted into the 4C model of the ITER TF and CS cooling circuits, which also includes active controls to achieve a smoothing of the variation of the heat load to the cryoplant. The training of the ANNs is achieved using the results of full 4C simulations (including detailed models of the magnets) for conventional sigmoid-like waveforms of the drivers and the predictive capabilities of the ANN-based models in the case of actual ITER operating scenarios are demonstrated by comparison with the results of full 4C runs, both with and without active smoothing, in terms of both accuracy and computational time. Exploiting the low computational effort requested by the ANN-based models, a demonstrative optimization study has been finally carried out, with the aim of choosing among different smoothing strategies for the standard ITER plasma operation.
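As a schematic of the surrogate idea (unrelated to the 4C code itself), the sketch below fits a small neural network to input/output pairs from a stand-in "expensive simulator" and then evaluates it cheaply; the function, network size and data volume are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Stand-in "expensive simulation": a heat-load-like quantity as a smooth
    # function of two driver variables; purely illustrative.
    rng = np.random.default_rng(10)
    X = rng.uniform(0, 1, (2000, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2     # fake simulator output

    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    ann.fit(X[:1500], y[:1500])                      # train on "simulation" results
    err = np.abs(ann.predict(X[1500:]) - y[1500:]).max()
    print("worst-case surrogate error on held-out points:", round(err, 4))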
Coherent diffractive imaging using randomly coded masks
Seaberg, Matthew H.; d'Aspremont, Alexandre; Turner, Joshua J.
2015-12-07
We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. As a result, the experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.
Simulated annealing in networks for computing possible arrangements for red and green cones
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and then the cones are allowed to determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated ranging from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity, and hence more revealing moiré patterns, than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
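A toy version of the annealed spin model, assuming an antiferromagnetic nearest-neighbor coupling so that unlike (red/green) neighbors are favored, with Metropolis updates under a geometric cooling schedule; the lattice size, coupling and schedule are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    N, J = 32, 1.0
    s = rng.choice([-1, 1], size=(N, N))        # random initial red/green assignment

    def local_field(s, i, j):
        return s[(i+1) % N, j] + s[(i-1) % N, j] + s[i, (j+1) % N] + s[i, (j-1) % N]

    T = 3.0
    for sweep in range(200):
        for _ in range(N * N):
            i, j = rng.integers(N, size=2)
            dE = -2.0 * J * s[i, j] * local_field(s, i, j)  # antiferromagnetic energy change
            if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
                s[i, j] *= -1
        T *= 0.98                                # geometric annealing schedule
    print("fraction of unlike vertical-neighbor pairs:",
          np.mean(s != np.roll(s, 1, axis=0)))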
Phase-only asymmetric optical cryptosystem based on random modulus decomposition
NASA Astrophysics Data System (ADS)
Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan
2018-06-01
We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is designed to improve the capacity to resist various attacks, including attacks based on iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be exploited in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by the Fresnel transform increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
Nicol, Andrew J; Navsaria, Pradeep H; Hommes, Martijn; Ball, Chad G; Edu, Sorin; Kahn, Delawir
2014-03-01
To determine if stable patients with a hemopericardium detected after penetrating chest trauma can be safely managed with pericardial drainage alone. The current international practice is to perform a sternotomy and cardiac repair if a hemopericardium is detected after penetrating chest trauma. The experience in Cape Town, South Africa, with performing a mandatory sternotomy in hemodynamically stable patients was that a sternotomy was unnecessary and the cardiac injury, if present, had sealed. A single-center parallel-group randomized controlled study was completed. All hemodynamically stable patients with a hemopericardium confirmed at subxiphoid pericardial window (SPW), and no active bleeding, were randomized. The primary outcome measure was survival to discharge from hospital. Secondary outcomes were complications and postoperative hospital stay. Fifty-five patients were randomized to sternotomy and 56 to pericardial drainage and wash-out only. Fifty-one of the 55 patients (93%) randomized to sternotomy had either no cardiac injury or a tangential injury. There were only 4 patients with penetrating wounds to the endocardium, and all had sealed. There was 1 death postoperatively among the 111 patients (0.9%), and this was in the sternotomy group. The mean intensive care unit (ICU) stay for a sternotomy was 2.04 days (range, 0-25 days) compared with 0.25 days (range, 0-2) for drainage (P < 0.001). The estimated mean difference indicated an ICU stay 1.8 days shorter for the drainage group (95% CI: 0.8-2.7). Total hospital stay was significantly shorter in the SPW group (P < 0.001; 95% CI: 1.4-3.3). SPW and drainage is effective and safe in the stable patient with a hemopericardium after penetrating chest trauma, with no increase in mortality and a shorter ICU and hospital stay. (ClinicalTrials.gov Identifier: NCT00823160).
NASA Astrophysics Data System (ADS)
Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.
2015-02-01
The development of operating scenarios is one of the key issues in the research for ITER, which aims to achieve a fusion gain (Q) of ˜10, while producing 500 MW of fusion power for ≥300 s. The ITER Research Plan proposes a success-oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted, maintaining H-mode, together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady-state scenario parameters have proved to be a very challenging and lengthy task, testing suites of codes consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations, and (burn) control simulations in ITER scenarios.
Altstein, L.; Li, G.
2012-01-01
Summary This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
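For orientation, a numpy sketch of textbook Fourier-domain double random phase encoding; the paper's scheme is a simplified lensless variant with Fresnel-domain keys and added QR-code verification, which this sketch does not reproduce.

    import numpy as np

    # Classical DRPE: multiply by one random phase mask, Fourier transform,
    # multiply by a second mask, and transform again. Sizes/seeds are illustrative.
    rng = np.random.default_rng(6)
    img = rng.random((64, 64))                          # stand-in input image
    m1 = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane phase mask
    m2 = np.exp(2j * np.pi * rng.random(img.shape))     # Fourier-plane phase mask

    enc = np.fft.ifft2(np.fft.fft2(img * m1) * m2)      # encrypted complex field

    # Decryption with the correct keys reverses the two steps.
    dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(m2)) * np.conj(m1)
    print("max reconstruction error:", np.abs(dec.real - img).max())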
Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixing, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Xiao-Chuan; Keyes, David; Yang, Chao
2014-09-29
The focus of the project is on the development and customization of highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have advantages such as ease of implementation, since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome these disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
In-situ Testing of the EHT High Gain and Frequency Ultra-Stable Integrators
NASA Astrophysics Data System (ADS)
Miller, Kenneth; Ziemba, Timothy; Prager, James; Slobodov, Ilia; Lotz, Dan
2014-10-01
Eagle Harbor Technologies (EHT) has developed a long-pulse integrator that exceeds the ITER specification for integration error and pulse duration. During the Phase I program, EHT improved the RPPL short-pulse integrators, added a fast digital reset, and demonstrated that the new integrators exceed the ITER integration error and pulse duration requirements. In Phase II, EHT developed Field Programmable Gate Array (FPGA) software that allows for integrator control and real-time signal digitization and processing. In the second year of Phase II, the EHT integrator will be tested at a validation platform experiment (HIT-SI) and a tokamak (DIII-D). In the Phase IIB program, EHT will continue development of the EHT integrator to reduce the overall cost per channel. EHT will test lower cost components, move to surface-mount components, and add an onboard FPGA and data acquisition to produce a stand-alone system with lower cost per channel and increased channel density. EHT will test the Phase IIB integrator at a validation platform experiment (HIT-SI) and a tokamak (DIII-D). Work supported by the DOE under Contract Number DE-SC0006281.
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2D and 3D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
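A small sketch of the algebraic (Kaczmarz/ART) iteration with a discrepancy-principle stopping rule of the kind used to handle semi-convergence; the projection matrix, noise level and threshold factor are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 300, 100
    A = rng.random((m, n))                      # stand-in ray/projection matrix
    x_true = rng.random(n)
    noise = 0.01 * rng.standard_normal(m)
    b = A @ x_true + noise
    tau_delta = 1.05 * np.linalg.norm(noise)    # discrepancy-principle threshold

    x = np.zeros(n)
    for sweep in range(100):
        for i in range(m):                      # one ART sweep over all rays
            ai = A[i]
            x += (b[i] - ai @ x) / (ai @ ai) * ai
        if np.linalg.norm(A @ x - b) < tau_delta:
            break                               # stop before noise is amplified
    print("stopped after sweep", sweep, "error:", np.linalg.norm(x - x_true))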
Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid
NASA Astrophysics Data System (ADS)
Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.
2001-08-01
This paper reports on a numerical algorithm for the steady flow of a viscoelastic fluid. The conservation and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and a first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations. The non-linear algebraic equations over the flow domain are solved iteratively by the symmetrical coupled Gauss-Seidel (SCGS) method. In both cases, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculation. Results are reported for a planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth, and they show that at high Weissenberg numbers the computational domain must be long enough. The convergence of the method has been verified with grid refinement. All the calculations have been performed on a PC equipped with a Pentium III processor at 550 MHz.
A blind deconvolution method based on L1/L2 regularization prior in the gradient space
NASA Astrophysics Data System (ADS)
Cai, Ying; Shi, Yu; Hua, Xia
2018-02-01
In image restoration, noise causes the restored result to differ substantially from the true image. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then updated iteratively, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because the information in the gradient domain is better suited to blur-kernel estimation, the blur kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain by the fast Fourier transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, preserving the edges and details of the image while ensuring the accuracy of the results.
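The solver described above combines an L1/L2 ratio prior with iterative shrinkage-thresholding and FFT-based updates. The sketch below shows only the shrinkage-thresholding building block, for a plain l1 penalty under circular convolution; in ratio-prior methods the L2 denominator is commonly frozen per outer iteration so each inner step reduces to a weighted version of exactly this update. All names and parameter values are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_deconv(y, kernel_ft, lam=0.005, step=0.5, n_iters=300):
    """ISTA for 0.5*||k*x - y||^2 + lam*||x||_1 under circular
    convolution; kernel_ft is the FFT of the blur kernel padded to the
    image size, so each gradient costs two FFT pairs."""
    x = np.zeros_like(y)
    for _ in range(n_iters):
        resid = np.fft.ifft2(kernel_ft * np.fft.fft2(x)).real - y
        grad = np.fft.ifft2(np.conj(kernel_ft) * np.fft.fft2(resid)).real
        x = soft_threshold(x - step * grad, step * lam)
    return x

# usage: blur a synthetic image with a normalized 5x5 box kernel, then restore
img = np.zeros((64, 64)); img[20:30, 25:40] = 1.0
k = np.zeros_like(img); k[:5, :5] = 1.0 / 25.0
K = np.fft.fft2(k)
y = np.fft.ifft2(K * np.fft.fft2(img)).real
x_hat = ista_deconv(y, K)
```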
New discretization and solution techniques for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.
1983-01-01
This paper considers several topics arising in the finite element solution of the incompressible Navier-Stokes equations. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. Following this, the role of artificial viscosity in viscous flow calculations is studied, emphasizing recent work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time-marching methods and classical iterative techniques, as well as some recent modifications, are mentioned.
Schou, Morten; Gustafsson, Finn; Videbaek, Lars; Markenvard, John; Ulriksen, Hans; Ryde, Henrik; Jensen, Jens C H; Nielsen, Tonny; Knudsen, Anne S; Tuxen, Christian D; Handberg, Jens; Sørensen, Per J; Espersen, Geert; Lind-Rasmussen, Søren; Keller, Niels; Egstrup, Kenneth; Nielsen, Olav W; Abdulla, Jawdat; Nyvad, Ole; Toft, Jens; Hildebrandt, Per R
2008-10-01
Randomized clinical trials have shown that newly discharged and symptomatic patients with chronic heart failure (CHF) benefit from follow-up in a specialized heart failure clinic (HFC). Clinically stable and educated patients are usually discharged from the HFC when on optimal therapy. It is unknown whether risk stratification using natriuretic peptides could identify patients who would benefit from longer-term follow-up. Furthermore, data on the use of natriuretic peptides for monitoring of stable patients with CHF are sparse. The aims of this study are to test the hypothesis that clinically stable, educated, and medically optimized patients with CHF with N-terminal pro-brain natriuretic peptide (NT-proBNP) levels > or = 1,000 pg/mL benefit from long-term follow-up in an HFC, and to assess the efficacy of NT-proBNP monitoring. A total of 1,250 clinically stable, medically optimized, and educated patients with CHF will be enrolled from 18 HFCs in Denmark. The patients will be randomized to treatment in general practice, to a standard follow-up program in the HFC, or to NT-proBNP monitoring in the HFC. The patients will be followed for 30 months (median). Data will be collected from 2006 to 2009. At present (March 2008), 720 patients are randomized. Results are expected to be presented in the second half of 2010. This article outlines the design of the NorthStar study. If our hypotheses are confirmed, the results will help cardiologists and nurses in HFCs to identify patients who may benefit from long-term follow-up. Our results may also indicate whether patients with CHF will benefit from adding serial NT-proBNP measurements to usual clinical monitoring.
Random diffusion and cooperation in continuous two-dimensional space.
Antonioni, Alberto; Tomassini, Marco; Buesser, Pierre
2014-03-07
This work presents a systematic study of population games of the Prisoner's Dilemma, Hawk-Dove, and Stag Hunt types in two-dimensional Euclidean space under two-person, one-shot game-theoretic interactions, and in the presence of agent random mobility. The goal is to investigate whether cooperation can evolve and be stable when agents can move randomly in continuous space. When the agents all have the same constant velocity, cooperation may evolve if the agents update their strategies by imitating the most successful neighbor. If a fitness-difference-proportional update rule is used instead, cooperation does not improve with respect to the static random geometric graph case. When viscosity effects set in and agent velocity becomes a quickly decreasing function of the number of neighbors they have, one observes the formation of monomorphic stable clusters of cooperators or defectors in the Prisoner's Dilemma. However, cooperation does not spread in the population as in the constant velocity case. Copyright © 2013 Elsevier Ltd. All rights reserved.
Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To compare the results of different methods for organizing HIV viral load (VL) data under different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data, with different missing-value mechanisms, from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization (EM) algorithm, a regression method, mean imputation, deletion, and Markov chain Monte Carlo (MCMC) were each used to fill in the missing data. The results of the different methods were compared according to distribution characteristics, accuracy and precision. Results: HIV VL data could not be transformed into a normal distribution. All methods performed well on data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases were all close to that of the original one with every method. EM, the regression method, mean imputation, and deletion underestimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can serve as a reference for estimating the mean HIV VL in the investigated population.
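To make the comparison concrete, here is a small, hypothetical simulation in the spirit of the study: skewed viral-load data, an MCAR mask, and two of the compared imputations (mean and regression). The data-generating parameters and variable names are invented for illustration; the study itself used SPSS and also evaluated EM, deletion, and MCMC.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# log-normal stand-in for skewed viral-load data (not normally distributed)
vl = rng.lognormal(mean=8.0, sigma=1.5, size=n)
cd4 = 800 - 40 * np.log(vl) + rng.normal(0, 30, size=n)  # auxiliary covariate

mask_mcar = rng.random(n) < 0.2          # 20% missing completely at random
obs = ~mask_mcar

# mean imputation: fill every gap with the observed mean
mean_imp = vl.copy()
mean_imp[mask_mcar] = vl[obs].mean()

# regression imputation: predict missing VL from the covariate
beta = np.polyfit(cd4[obs], vl[obs], 1)
reg_imp = vl.copy()
reg_imp[mask_mcar] = np.polyval(beta, cd4[mask_mcar])

print(f"true mean {vl.mean():.0f}, mean-imputed {mean_imp.mean():.0f}, "
      f"regression-imputed {reg_imp.mean():.0f}")
```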
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron Emission Tomography (PET) projection data, or sinograms, suffer from poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose a pre-reconstruction sinogram filtering approach based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimize angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered back-projection (FBP), ordered-subset maximum-likelihood expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge-preservation capability. Further analysis of the achieved improvement is carried out for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
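The paper's 3D mean-median filter is not specified here beyond its three aims, so the following is a generic hybrid sketch under an assumed design: mean filtering in flat regions and median filtering near edges, with an edge map taken from a gradient magnitude. Function names and thresholds are illustrative, not the authors' filter.

```python
import numpy as np
from scipy import ndimage

def mean_median_filter(sino, size=3, edge_thresh=0.05):
    """Hybrid smoothing: mean filter in flat regions (noise suppression),
    median filter near edges (edge preservation). A generic stand-in for
    the paper's 3D mean-median sinogram filter."""
    mean_f = ndimage.uniform_filter(sino, size=size)
    median_f = ndimage.median_filter(sino, size=size)
    # local gradient magnitude flags edge regions
    grad = ndimage.gaussian_gradient_magnitude(sino, sigma=1.0)
    edges = grad > edge_thresh * grad.max()
    return np.where(edges, median_f, mean_f)

# noisy 3D sinogram stack (angles x detector bins x slices), Poisson noise
sino = np.random.poisson(5.0, size=(16, 32, 32)).astype(float)
filtered = mean_median_filter(sino)
```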
An efficient algorithm for the generalized Foldy-Lax formulation
NASA Astrophysics Data System (ADS)
Huang, Kai; Li, Peijun; Zhao, Hongkai
2013-02-01
Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution of the formulation is established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient, physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each iteration step. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
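The physically motivated Gauss-Seidel iteration alternates between the two sub-systems. A linear-algebra skeleton of such a block sweep, with convergence governed by the strength of the off-diagonal (inter-scatterer) coupling, might look as follows; the matrices here are generic stand-ins, not the Foldy-Lax operators.

```python
import numpy as np

def block_gauss_seidel(A11, A12, A21, A22, b1, b2, tol=1e-10, max_iters=200):
    """Block Gauss-Seidel for [A11 A12; A21 A22] [x1; x2] = [b1; b2]:
    solve the first sub-block with x2 frozen, then the second with x1
    frozen, and repeat until the coupled residual is small."""
    x1 = np.zeros(len(b1)); x2 = np.zeros(len(b2))
    for _ in range(max_iters):
        x1 = np.linalg.solve(A11, b1 - A12 @ x2)   # point-scatterer-type solve
        x2 = np.linalg.solve(A22, b2 - A21 @ x1)   # extended-scatterer-type solve
        res = (np.linalg.norm(A11 @ x1 + A12 @ x2 - b1)
               + np.linalg.norm(A21 @ x1 + A22 @ x2 - b2))
        if res < tol:
            break
    return x1, x2

# weak off-diagonal coupling (well-separated scatterers) converges quickly
rng = np.random.default_rng(0)
A11 = 4 * np.eye(5) + 0.1 * rng.random((5, 5))
A22 = 4 * np.eye(3) + 0.1 * rng.random((3, 3))
A12 = 0.1 * rng.random((5, 3)); A21 = 0.1 * rng.random((3, 5))
x1, x2 = block_gauss_seidel(A11, A12, A21, A22, np.ones(5), np.ones(3))
```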
NASA Astrophysics Data System (ADS)
Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.
2013-04-01
A new non-disruptive error field (EF) assessment technique, not restricted to low density and thus low beta, was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n = 10 and n = 8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behaviour was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted optimization of the toroidal phase. The technique was tested against deliberately applied as well as intrinsic EFs of n = 8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mode rotation in response to a uniformly rotating perturbation. The results are in good agreement with theory, and the extension to lower n, to tearing modes and to tokamaks, including ITER, is discussed.
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R for the estimation of (co)variance components were reviewed in order to promote appropriate use of the method. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
NASA Astrophysics Data System (ADS)
Karaoǧlu, Haydar; Romanowicz, Barbara
2018-06-01
We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the southwestern Pacific and eastern Africa, while low attenuation zones fade beneath most of the cratons. The strong negative correlation of Q^{-1}_μ and VS anomalies at shallow upper-mantle depths points to a common dominant origin for the two, likely due to variations in thermal structure. A comparison with two other global upper-mantle attenuation models shows promising consistency. As we updated the elastic 3-D model in alternate iterations, we found that the VS part of the model was stable, while the ξ structure evolution was more pronounced, indicating that it may be important to include 3-D attenuation effects when inverting for ξ, possibly due to the influence of dispersion corrections on this less well-constrained parameter.
Fluctuations around equilibrium laws in ergodic continuous-time random walks.
Schulz, Johannes H P; Barkai, Eli
2015-06-01
We study occupation time statistics in ergodic continuous-time random walks. Under thermal detailed balance conditions, the average occupation time is given by the Boltzmann-Gibbs canonical law. But close to the nonergodic phase, the finite-time fluctuations around this mean are large and nontrivial. They exhibit dual time scaling and distribution laws: the infinite density of large fluctuations complements the Lévy-stable density of bulk fluctuations. Neither of the two should be interpreted as a stand-alone limiting law, as each has its own deficiency: the infinite density has an infinite norm (despite particle conservation), while the stable distribution has an infinite variance (although occupation times are bounded). These unphysical divergences are remedied by consistent use and interpretation of both formulas. Interestingly, while the system's canonical equilibrium laws naturally determine the mean occupation time of the ergodic motion, they also control the infinite and Lévy-stable densities of fluctuations. The duality of stable and infinite densities is in fact ubiquitous for these dynamics, as it concerns the time averages of general physical observables.
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
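A quick numerical check of this statement is easy to set up: dilute a Gaussian Wigner matrix, normalize by the effective entry variance, and compare the eigenvalue histogram with the semicircle density. The parameters below are illustrative; the semicircle emerges for Np >> 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 1000, 0.05                        # keep each entry with probability p
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < p)
A = np.triu(W, 1)
A = A + A.T                              # symmetric, diluted random matrix
A /= np.sqrt(N * p)                      # entry variance ~ 1/N after scaling
eigs = np.linalg.eigvalsh(A)

hist, edges = np.histogram(eigs, bins=50, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
# Wigner semicircle on [-2, 2]: rho(x) = sqrt(4 - x^2) / (2*pi)
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))  # small deviation when N*p >> 1
```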
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
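A minimal sketch of the core hit-and-run step on an indicator-defined near-optimal region is given below. For brevity it replaces the slice-sampling step with simple rejection along the sampled line, and the example region (a quadratic objective within a fixed tolerance of its optimum, plus box bounds) is invented for illustration.

```python
import numpy as np

def hit_and_run(x0, in_region, n_samples=500, max_step=5.0, rng=None):
    """Hit-and-Run over a region given by an indicator function: pick a
    random direction, then move a random feasible distance along that
    line (rejection here stands in for the paper's slice sampling)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)             # uniform direction on the sphere
        for _ in range(100):               # sample a feasible run length
            t = rng.uniform(-max_step, max_step)
            if in_region(x + t * d):
                x = x + t * d
                break
        samples.append(x.copy())
    return np.array(samples)

# illustrative near-optimal region: f(x) = ||x - 1||^2 within 0.5 of its
# optimum (0), intersected with box constraints |x_i| <= 2
region = lambda x: float((x - 1.0) @ (x - 1.0)) <= 0.5 and bool(np.all(np.abs(x) <= 2.0))
alternatives = hit_and_run(np.array([1.0, 1.0]), region)
```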
NASA Astrophysics Data System (ADS)
Ehsani, Amir Houshang; Quiel, Friedrich
2009-02-01
In this paper, we demonstrate artificial neural networks—self-organizing map (SOM)—as a semi-automatic method for extraction and analysis of landscape elements in the man and biosphere reserve "Eastern Carpathians". The Shuttle Radar Topography Mission (SRTM) collected data to produce generally available digital elevation models (DEM). Together with Landsat Thematic Mapper data, this provides a unique, consistent and nearly worldwide data set. To integrate the DEM with Landsat data, it was re-projected from geographic coordinates to UTM with 28.5 m spatial resolution using cubic convolution interpolation. To provide quantitative morphometric parameters, first-order (slope) and second-order derivatives of the DEM—minimum curvature, maximum curvature and cross-sectional curvature—were calculated by fitting a bivariate quadratic surface with a window size of 9×9 pixels. These surface curvatures are strongly related to landform features and geomorphological processes. Four morphometric parameters and seven Landsat-enhanced thematic mapper (ETM+) bands were used as input for the SOM algorithm. Once the network weights have been randomly initialized, different learning parameter sets, e.g. initial radius, final radius and number of iterations, were investigated. An optimal SOM with 20 classes using 1000 iterations and a final neighborhood radius of 0.05 provided a low average quantization error of 0.3394 and was used for further analysis. The effect of randomization of initial weights for optimal SOM was also studied. Feature space analysis, three-dimensional inspection and auxiliary data facilitated the assignment of semantic meaning to the output classes in terms of landform, based on morphometric analysis, and land use, based on spectral properties. Results were displayed as thematic map of landscape elements according to form, cover and slope. Spectral and morphometric signature analysis with corresponding zoom samples superimposed by contour lines were compared in detail to clarify the role of morphometric parameters to separate landscape elements. The results revealed the efficiency of SOM to integrate SRTM and Landsat data in landscape analysis. Despite the stochastic nature of SOM, the results in this particular study are not sensitive to randomization of initial weight vectors if many iterations are used. This procedure is reproducible for the same application with consistent results.
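For readers unfamiliar with the training loop, a minimal SOM with a decaying Gaussian neighborhood is sketched below, with 20 output nodes and a final radius of 0.05 to echo the configuration reported above. The random input matrix merely stands in for the 11 features (seven ETM+ bands plus four morphometric parameters); the grid shape, learning-rate schedule, and decay law are assumptions, not the study's exact settings.

```python
import numpy as np

def train_som(data, grid=(4, 5), n_iters=1000, r0=2.0, r_final=0.05,
              lr0=0.5, rng=None):
    """Minimal SOM: nodes on a 2D grid, Gaussian neighborhood whose
    radius decays from r0 to r_final over n_iters iterations."""
    rng = rng or np.random.default_rng(0)
    rows, cols = grid
    weights = rng.random((rows * cols, data.shape[1]))
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(n_iters):
        frac = t / n_iters
        radius = r0 * (r_final / r0) ** frac          # exponential decay
        lr = lr0 * (1 - frac)                         # linear decay
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * radius ** 2))        # neighborhood function
        weights += lr * h[:, None] * (x - weights)
    return weights

# stand-in for 7 ETM+ bands + 4 morphometric parameters per pixel
data = np.random.rand(10000, 11)
w = train_som(data)                                   # 20 output classes
classes = np.argmin(((data[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
```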
An iterative synthetic approach to engineer a high-performing PhoB-specific reporter.
Stoudenmire, Julie L; Essock-Burns, Tara; Weathers, Erena N; Solaimanpour, Sina; Mrázek, Jan; Stabb, Eric V
2018-05-11
Transcriptional reporters are common tools for analyzing either the transcription of a gene of interest or the activity of a specific transcriptional regulator. Unfortunately, the latter application has the shortcoming that native promoters did not evolve as optimal readouts for the activity of a particular regulator. We sought to synthesize an optimized transcriptional reporter for assessing PhoB activity, aiming for maximal "on" expression when PhoB is active, minimal background in the "off" state, and no control elements for other regulators. We designed specific sequences for promoter elements with appropriately spaced PhoB-binding sites, and at nineteen additional intervening nucleotide positions, for which we did not predict sequence-specific effects, the bases were randomized. Eighty-three such constructs were screened in Vibrio fischeri, enabling us to identify bases at particular randomized positions that significantly correlated with high "on" or low "off" expression. A second round of promoter design rationally constrained thirteen additional positions, leading to a reporter with high PhoB-dependent expression, essentially no background, and no other known regulatory elements. As expressed reporters, we used both stable and destabilized GFP, the latter with a half-life of eighty-one minutes in V. fischeri. In culture, PhoB induced the reporter when phosphate was depleted below 10 μM. During symbiotic colonization of its host squid Euprymna scolopes, the reporter indicated heterogeneous phosphate availability in different light-organ microenvironments. Finally, testing this construct in other Proteobacteria demonstrated its broader utility. The results illustrate how a limited ability to predict synthetic promoter-reporter performance can be overcome through iterative screening and re-engineering. IMPORTANCE Transcriptional reporters can be powerful tools for assessing when a particular regulator is active; however, native promoters may not be ideal for this purpose. Optimal reporters should be specific to the regulator being examined and should maximize the difference between "on" and "off" states; however, these properties are distinct from the selective pressures driving the evolution of natural promoters. Synthetic promoters offer a promising alternative, but our understanding often does not enable fully predictive promoter design, and the large number of alternative sequence possibilities can be intractable. In a synthetic promoter region with over thirty-four billion sequence variants, we identified bases correlated with favorable performance by screening only eighty-three candidates, allowing us to rationally constrain our design. We thereby generated an optimized reporter that is induced by PhoB and used it to explore the low-phosphate response of V. fischeri. This promoter-design strategy will facilitate the engineering of other regulator-specific reporters. Copyright © 2018 American Society for Microbiology.
Real-time inextensible surgical thread simulation.
Xu, Lang; Liu, Qian
2018-03-27
This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint-projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint-projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method. The twining and knotting of multiple threads yield stable solutions for the contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
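For context, the standard PBD distance-constraint projection that the paper's direct tridiagonal solve replaces looks like the sketch below; repeated Gauss-Seidel passes only approach inextensibility, which is why a direct solve along the one-dimensional chain is attractive. The particle setup is illustrative.

```python
import numpy as np

def project_distance_constraints(p, inv_mass, rest_len, n_passes=10):
    """Gauss-Seidel PBD projection of the inter-particle distance
    constraints of a chain; the paper replaces these passes with one
    direct tridiagonal solve per iteration for true inextensibility."""
    for _ in range(n_passes):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            wsum = inv_mass[i] + inv_mass[i + 1]
            if dist < 1e-12 or wsum == 0.0:
                continue
            corr = (dist - rest_len) / wsum * (d / dist)
            p[i] += inv_mass[i] * corr          # move endpoints toward rest length
            p[i + 1] -= inv_mass[i + 1] * corr
    return p

# 20-particle thread with the first particle pinned (zero inverse mass)
rng = np.random.default_rng(0)
pts = np.cumsum(rng.normal(0.0, 0.12, (20, 3)), axis=0)
inv_m = np.ones(20); inv_m[0] = 0.0
pts = project_distance_constraints(pts, inv_m, rest_len=0.1)
```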
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form involving the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result from the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration, the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
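The compact incomplete-beta form is easy to reproduce for the classical ratio test: under the no-change hypothesis, the ratio of two multi-looked intensities follows an F-distribution, whose CDF is exactly the regularized incomplete beta function. The sketch below uses SciPy's betainc; the look numbers and thresholds are illustrative, and the paper's exact parameterization may differ.

```python
import numpy as np
from scipy.special import betainc

def no_change_cdf(mean1, mean2, n_looks1, n_looks2):
    """CDF of the intensity ratio of two multi-looked SAR measurements
    under the no-change hypothesis: the ratio is F-distributed with
    (2*n_looks1, 2*n_looks2) degrees of freedom, and the F CDF is the
    regularized incomplete beta function (scipy.stats.f.cdf agrees)."""
    f = mean1 / mean2
    d1, d2 = 2.0 * n_looks1, 2.0 * n_looks2
    return betainc(d1 / 2.0, d2 / 2.0, d1 * f / (d1 * f + d2))

p = no_change_cdf(mean1=1.8, mean2=1.0, n_looks1=16, n_looks2=16)
changed = (p < 0.005) or (p > 0.995)     # two-sided CFAR decision
```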
NASA Astrophysics Data System (ADS)
Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra
2014-11-01
The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
Vachha, Behroze; Brodoefel, Harald; Wilcox, Carol; Hackney, David B; Moonis, Gul
2013-12-01
To compare objective and subjective image quality in neck CT images acquired at different tube current-time products (275 mAs and 340 mAs) and reconstructed with filtered-back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR). HIPAA-compliant study with IRB approval and waiver of informed consent. 66 consecutive patients were randomly assigned to undergo contrast-enhanced neck CT at a standard tube-current-time-product (340 mAs; n = 33) or reduced tube-current-time-product (275 mAs, n = 33). Data sets were reconstructed with FBP and 2 levels (30%, 40%) of ASIR-FBP blending at 340 mAs and 275 mAs. Two neuroradiologists assessed subjective image quality in a blinded and randomized manner. Volume CT dose index (CTDIvol), dose-length-product (DLP), effective dose, and objective image noise were recorded. Signal-to-noise ratio (SNR) was computed as mean attenuation in a region of interest in the sternocleidomastoid muscle divided by image noise. Compared with FBP, ASIR resulted in a reduction of image noise at both 340 mAs and 275 mAs. Reduction of tube current from 340 mAs to 275 mAs resulted in an increase in mean objective image noise (p=0.02) and a decrease in SNR (p = 0.03) when images were reconstructed with FBP. However, when the 275 mAs images were reconstructed using ASIR, the mean objective image noise and SNR were similar to those of the standard 340 mAs CT images reconstructed with FBP (p>0.05). Subjective image noise was ranked by both raters as either average or less-than-average irrespective of the tube current and iterative reconstruction technique. Adapting ASIR into neck CT protocols reduced effective dose by 17% without compromising image quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model, (Y - E_Y) = (X - E_X) · Ξ, that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
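The 'closed form' via the singular-value decomposition can be stated compactly: stack C = [X Y], take its SVD, and read Ξ off the right singular vectors. A minimal sketch for the equal-weight case follows (the paper's iterative algorithm and lemmas address more general settings); the test data are synthetic.

```python
import numpy as np

def multivariate_tls(X, Y):
    """One-step multivariate TLS via the SVD (errors in both X and Y).
    With C = [X Y] = U S V^T, partition V into blocks and set
    Xi = -V12 @ inv(V22), the classical TLS closed form."""
    m = X.shape[1]
    V = np.linalg.svd(np.hstack([X, Y]))[2].T   # right singular vectors
    V12, V22 = V[:m, m:], V[m:, m:]
    return -V12 @ np.linalg.inv(V22)

rng = np.random.default_rng(2)
Xi_true = np.array([[1.0, 0.5], [-2.0, 1.0]])
X0 = rng.normal(size=(300, 2))
X = X0 + 0.05 * rng.normal(size=X0.shape)       # noisy coefficient matrix
Y = X0 @ Xi_true + 0.05 * rng.normal(size=(300, 2))  # noisy observations
print(multivariate_tls(X, Y))                   # approximately Xi_true
```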
A Weighted Closed-Form Solution for RGB-D Data Registration
NASA Astrophysics Data System (ADS)
Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.
2016-06-01
Existing 3D indoor mapping approaches for RGB-D data are predominantly point-based and feature-based methods. In most cases, iterative closest point (ICP) and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid-body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step; it does not need good initial estimates and substantially decreases the demand for computer resources in contrast to iterative methods. Our method first exploits the RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm jointly with a statistical dispersion measure called the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. The loop closure consists of recognizing when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the travelled trajectory.
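To illustrate what a weighted one-step solution buys, here is a compact closed-form rigid registration in the SVD (Kabsch/Umeyama) form, weighting points by their assumed random errors. The paper itself uses dual-number quaternions, which minimize an equivalent cost; this SVD variant is shown only because it is short, and the weights and test data are illustrative.

```python
import numpy as np

def weighted_rigid_registration(P, Q, w):
    """Weighted closed-form rigid registration: find R, t minimizing
    sum_i w_i ||R p_i + t - q_i||^2 in one step via the SVD of the
    weighted cross-covariance (no initial guess, no iteration)."""
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(0)
    q_bar = (w[:, None] * Q).sum(0)
    H = (w[:, None] * (P - p_bar)).T @ (Q - q_bar)   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation only
    t = q_bar - R @ p_bar
    return R, t

# synthetic check: known rotation about z plus a translation
rng = np.random.default_rng(3)
P = rng.random((100, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = weighted_rigid_registration(P, Q, w=np.ones(100))  # recovers R_true, t
```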
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
[Figure-list fragment] 5-31: Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. [...] Coding for AWGN channels has long been studied. There are well-known soft-decision codes like turbo codes and LDPC codes that can approach capacity. [...] 1. The bits are encoded with a (... bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are
Use of LANDSAT imagery for wildlife habitat mapping in northeast and east central Alaska
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Two scenes were analyzed by applying an iterative cluster analysis to a 2% random data sample and then using the resulting clusters as a training set basis for maximum likelihood classification. Twenty-six and twenty-seven categorical classes, respectively, resulted from this process. The majority of classes in each case were quite specific vegetation types; each of these types has specific value as moose habitat.
Retrieval of constituent mixing ratios from limb thermal emission spectra
NASA Technical Reports Server (NTRS)
Shaffer, William A.; Kunde, Virgil G.; Conrath, Barney J.
1988-01-01
An onion-peeling iterative, least-squares relaxation method to retrieve mixing ratio profiles from limb thermal emission spectra is presented. The method has been tested on synthetic data, containing various amounts of added random noise for O3, HNO3, and N2O. The retrieval method is used to obtain O3 and HNO3 mixing ratio profiles from high-resolution thermal emission spectra. Results of the retrievals compare favorably with those obtained previously.
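In the linearized, noise-free limit the onion-peeling idea reduces to back-substitution: the highest tangent ray samples only the top layer, so one retrieves from the top down, subtracting each retrieved layer's contribution. The sketch below shows that skeleton only; the actual method wraps an iterative least-squares relaxation around each layer, and the kernel K here is a random upper-triangular stand-in.

```python
import numpy as np

def onion_peel(K, y):
    """Onion peeling for limb sounding: with layers indexed bottom (0)
    to top (n-1), the ray at tangent height i only samples layers j >= i,
    so K is upper triangular and the profile is retrieved top-down."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        resid = y[i] - K[i, i + 1:] @ x[i + 1:]   # remove higher layers
        x[i] = resid / K[i, i]                    # solve the current layer
    return x

n = 10
K = np.triu(np.random.default_rng(4).random((n, n)) + 0.5)
x_true = np.linspace(1.0, 0.1, n)                 # decreasing mixing ratio
x_retrieved = onion_peel(K, K @ x_true)           # exact in the noise-free case
```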
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
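The description of OGM, combining the current gradient with historical gradients and a step size set by the Lipschitz constant, matches the structure of accelerated first-order methods with the O(1/k^2) rate. A generic sketch on a smooth quadratic is below; RCRF training additionally involves the l1 term and CRF gradients, which are omitted here.

```python
import numpy as np

def accelerated_gradient(grad, L, x0, n_iters=500):
    """Optimal-gradient-style method: each step combines the current
    gradient with momentum built from past iterates, with step size 1/L
    from the Lipschitz constant, achieving the O(1/k^2) rate."""
    x = y = np.asarray(x0, float)
    t = 1.0
    for _ in range(n_iters):
        x_new = y - grad(y) / L                       # gradient step at y
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # history (momentum) term
        x, t = x_new, t_new
    return x

# quadratic test problem: f(x) = 0.5 x'Ax - b'x, L = lambda_max(A)
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
x = accelerated_gradient(lambda v: A @ v - b,
                         L=np.linalg.eigvalsh(A)[-1], x0=np.zeros(2))
print(x, np.linalg.solve(A, b))   # the two agree
```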
High internal inductance for steady-state operation in ITER and a reactor
Ferron, John R.; Holcomb, Christopher T.; Luce, Timothy C.; ...
2015-06-26
Increased confinement and ideal stability limits at relatively high values of the internal inductance (ℓ_i) have enabled an attractive scenario for steady-state tokamak operation to be demonstrated in DIII-D. Normalized plasma pressure in the range appropriate for a reactor has been achieved in high elongation and triangularity double-null divertor discharges with β_N ≈ 5 at ℓ_i ≈ 1.3, near the ideal n = 1 kink stability limit calculated without the effect of a stabilizing vacuum vessel wall, with the ideal-wall limit still higher at β_N > 5.5. Confinement is above the H-mode level with H_98(y,2) ≈ 1.8. At q_95 ≈ 7.5, the current is overdriven, with bootstrap current fraction f_BS ≈ 0.8, noninductive current fraction f_NI > 1 and negative surface voltage. For ITER (which has a single-null divertor shape), operation at ℓ_i ≈ 1 is a promising option with f_BS ≈ 0.5 and the remaining current driven externally near the axis where the electron cyclotron current drive efficiency is high. This scenario has been tested in the ITER shape in DIII-D at q_95 = 4.8, so far reaching f_NI = 0.7 and f_BS = 0.4 at β_N ≈ 3.5 with performance appropriate for the ITER Q = 5 mission, H_89 β_N / q_95^2 ≈ 0.3. Modeling studies explored how increased current drive power for DIII-D could be applied to maintain a stationary, fully noninductive high-ℓ_i discharge. Lastly, stable solutions in the double-null shape are found without the vacuum vessel wall at β_N = 4, ℓ_i = 1.07 and f_BS = 0.5, and at β_N = 5 with the vacuum vessel wall.
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix based methods, like for example the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step with a projection basis updated at each iteration rather than at periodic restarts like in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
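The core idea above, minimizing the residual norm over a basis updated at every iteration rather than at restarts, can be mimicked with an Anderson-type least-squares step wrapped around any fixed-point iteration x <- G(x). The sketch below is only in that spirit and is not the authors' Boostconv formulation; the basis size and the penalty used to enforce the affine constraint are implementation choices.

```python
import numpy as np

def boosted_fixed_point(G, x0, m=5, n_iters=100, tol=1e-12):
    """Residual-minimizing acceleration of x <- G(x): the next iterate is
    the combination of recent images G(x) whose coefficients (summing to
    one) minimize the combined residual norm, recomputed every step."""
    x = np.atleast_1d(np.asarray(x0, float))
    Gs, Rs = [], []                      # stored images and residuals
    for _ in range(n_iters):
        g = G(x)
        r = g - x
        if np.linalg.norm(r) < tol:
            break
        Gs.append(g); Rs.append(r)
        if len(Gs) > m:                  # keep a small, rolling basis
            Gs.pop(0); Rs.pop(0)
        R = np.column_stack(Rs)
        k = R.shape[1]
        # least squares with a heavy penalty enforcing sum(a) = 1
        M = np.vstack([R, 1e8 * np.ones((1, k))])
        rhs = np.concatenate([np.zeros(R.shape[0]), [1e8]])
        a = np.linalg.lstsq(M, rhs, rcond=None)[0]
        x = np.column_stack(Gs) @ a
    return x

print(boosted_fixed_point(np.cos, [0.0]))   # -> 0.7390851 (fixed point of cos)
```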
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid spacings and time steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
NASA Astrophysics Data System (ADS)
Baylor, L. R.
2012-10-01
Deuterium pellet injection was used on the DIII-D tokamak to successfully demonstrate for the first time the on-demand triggering of edge localized modes (ELMs) at a 10x higher rate, and with much smaller intensity, than natural ELMs. The triggering of small ELMs by high frequency pellet injection has been proposed as a method to prevent large ELMs that can erode the ITER plasma facing components [1]. The demonstration was made by injecting slow (<200 m/s) 1.3 mm diameter deuterium pellets at 60 Hz from the low field side in an ITER similar plasma with 5 Hz natural ELM frequency. The input power was only slightly above the H-mode threshold. Similar non-pellet discharges had ELM energy losses up to 55 kJ (~8% of total stored energy), while the case with pellets demonstrated ELMs with an average energy loss less than 3 kJ (<1% of the total). Total divertor ELM heat flux was reduced by more than a factor of 10. Central accumulation of Ni was significantly reduced in the pellet triggered ELM case. No significant increase in density or decrease in energy confinement was observed. Stability analysis of these discharges shows that the pedestal parameters are approaching the peeling unstable region just before a natural ELM crash. In the rapid pellet small ELM case, the pedestal conditions are well within the stable region, with a narrower pedestal width observed. This narrower width is consistent with a picture in which the pellets are triggering the ELMs before the width expands to the critical ELM width. Nonlinear MHD simulations of the pellet ELM triggering show destabilization of ballooning modes by a local pressure perturbation. The implications of these results for pellet ELM pacing in ITER will be discussed. [1] P.T. Lang et al., Nucl. Fusion 44, 665 (2004).
Active control for stabilization of neoclassical tearing modes
NASA Astrophysics Data System (ADS)
Humphreys, D. A.; Ferron, J. R.; La Haye, R. J.; Luce, T. C.; Petty, C. C.; Prater, R.; Welander, A. S.
2006-05-01
This work describes active control algorithms used by DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] to stabilize and maintain suppression of 3/2 or 2/1 neoclassical tearing modes (NTMs) by application of electron cyclotron current drive (ECCD) at the rational q surface. The DIII-D NTM control system can determine the correct q-surface/ECCD alignment and stabilize existing modes within 100-500ms of activation, or prevent mode growth with preemptive application of ECCD, in both cases enabling stable operation at normalized beta values above 3.5. Because NTMs can limit performance or cause plasma-terminating disruptions in tokamaks, their stabilization is essential to the high performance operation of ITER [R. Aymar et al., ITER Joint Central Team, ITER Home Teams, Nucl. Fusion 41, 1301 (2001)]. The DIII-D NTM control system has demonstrated many elements of an eventual ITER solution, including general algorithms for robust detection of q-surface/ECCD alignment and for real-time maintenance of alignment following the disappearance of the mode. This latter capability, unique to DIII-D, is based on real-time reconstruction of q-surface geometry by a Grad-Shafranov solver using external magnetics and internal motional Stark effect measurements. Alignment is achieved by varying either the plasma major radius (and the rational q surface) or the toroidal field (and the deposition location). The requirement to achieve and maintain q-surface/ECCD alignment with accuracy on the order of 1cm is routinely met by the DIII-D Plasma Control System and these algorithms. We discuss the integrated plasma control design process used for developing these and other general control algorithms, which includes physics-based modeling and testing of the algorithm implementation against simulations of actuator and plasma responses. This systematic design/test method and modeling environment enabled successful mode suppression by the NTM control system upon first-time use in an experimental discharge.
Benchmarking kinetic calculations of resistive wall mode stability
NASA Astrophysics Data System (ADS)
Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.
2014-05-01
Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particle's precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution has historically been treated in the codes are identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ markedly between individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., randomized, multi-center, placebo-controlled trials of predominantly male post-MI subjects and one randomized... group on the aspirin molecule. This acetyl group is responsible for the inactivation of cyclo-oxygenase... event rate was reduced to 5 percent from the 10 percent rate in the placebo group. Chronic Stable Angina...
Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan
2008-01-01
Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35–88%) over available, manually created disease terminologies. PMID:18999169
van Rossum, Joris
2006-01-01
In its essence, the explanatory potential of the theory of natural selection is based on the iterative process of random production and variation, and subsequent non-random, directive selection. It is shown that within this explanatory framework, there is no place for the explanation of sexual reproduction. Thus, in Darwinist literature, sexual reproduction - one of nature's most salient characteristics - is often either assumed or ignored, but not explained. This fundamental and challenging gap within a complete naturalistic understanding of living beings calls for a cybernetic account of sexual reproduction, meaning an understanding of the dynamic and creative potential of living beings to continuously and autonomously produce new organisms with unique and specific constellations.
General Exact Solution to the Problem of the Probability Density for Sums of Random Variables
NASA Astrophysics Data System (ADS)
Tribelsky, Michael I.
2002-07-01
The exact explicit expression for the probability density pN(x) for a sum of N random, arbitrary correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of pN(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts as well as that for the entire profile is obtained. A number of particular examples are considered in detail.
Lee, Sangyun; Kwon, Heejin; Cho, Jihan
2016-12-01
To investigate image quality characteristics of abdominal computed tomography (CT) scans reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) vs the currently applied adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved study included 35 consecutive patients who underwent CT of the abdomen. Among these 35 patients, 27 with focal liver lesions underwent abdomen CT with a 128-slice multidetector unit using the following parameters: fixed noise index of 30, 1.25 mm slice thickness, 120 kVp, and a gantry rotation time of 0.5 seconds. CT images were analyzed depending on the method of reconstruction: ASIR (30%, 50%, and 70%) vs ASIR-V (30%, 50%, and 70%). Three radiologists independently assessed randomized images in a blinded manner. Imaging sets were compared for focal lesion detection numbers, overall image quality, and objective noise with a paired-sample t test. Interobserver agreement was assessed with the intraclass correlation coefficient. The detection of small focal liver lesions (<10 mm) was significantly higher when ASIR-V was used when compared to ASIR (P <0.001). Subjective image noise, artifact, and objective image noise in liver were generally significantly better for ASIR-V compared to ASIR, especially in 50% ASIR-V. Image sharpness and diagnostic acceptability were significantly worse in 70% ASIR-V compared to various levels of ASIR. Images analyzed using 50% ASIR-V were significantly better than three different series of ASIR or other ASIR-V conditions at providing diagnostically acceptable CT scans without compromising image quality and in the detection of focal liver lesions.
Effectiveness of Ivabradine in Treating Stable Angina Pectoris.
Ye, Liwen; Ke, Dazhi; Chen, Qingwei; Li, Guiqiong; Deng, Wei; Wu, Zhiqin
2016-04-01
Many studies show that ivabradine is effective for stable angina. This meta-analysis was performed to determine the effect of treatment duration and control group type on ivabradine efficacy in stable angina pectoris. Relevant articles in the English language in the PUBMED and EMBASE databases and related websites were identified by using the search terms "ivabradine," "angina," "randomized controlled trials," and "Iva." The final search date was November 2, 2015. Articles were included if they were published randomized controlled trials that related to ivabradine treatment of stable angina pectoris. Patients with stable angina pectoris were included. The patients were classified according to treatment duration (<3 vs ≥3 months) or type of control group (placebo vs beta-receptor blocker). Angina outcomes were heart rate at rest or peak, exercise duration, and time to angina onset. Seven articles were selected. There were 3747 patients: 2100 and 1647 were in the ivabradine and control groups, respectively. The ivabradine group had significantly longer exercise duration when they had been treated for at least 3 months, but not when treatment time was less than 3 months. Ivabradine significantly improved time to angina onset regardless of treatment duration. Control group type did not influence the effect of exercise duration (significant) or time to angina onset (significant). Compared with beta-blocker and placebo, ivabradine improved exercise duration and time to onset of angina in patients with stable angina. However, its ability to improve exercise duration only became significant after at least 3 months of treatment.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
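To make the OSEM procedure concrete, the sketch below applies the multiplicative MLEM update one ordered subset of projections at a time, with 25 iterations x 8 subsets roughly matching the ~200 EM-equivalent iterations quoted above. The system matrix, phantom, and count scale are illustrative toys, not the Monte Carlo SPECT setup of the study.

```python
import numpy as np

# Toy OSEM reconstruction: multiplicative MLEM updates applied one ordered
# subset of projections at a time. A, x_true, and the count scale are
# illustrative stand-ins for a real SPECT system model.
rng = np.random.default_rng(0)
n_pix, n_proj, n_sub = 64, 96, 8
A = rng.random((n_proj, n_pix))            # toy projection operator
x_true = rng.random(n_pix)                 # toy activity distribution
y = rng.poisson(A @ x_true * 50)           # noisy projection counts

subsets = np.array_split(np.arange(n_proj), n_sub)
x = np.ones(n_pix)                         # flat initial estimate
for it in range(25):                       # 25 x 8 ~ 200 EM-equivalent iters
    for s in subsets:
        As, ys = A[s], y[s]
        ratio = ys / np.maximum(As @ x, 1e-12)          # measured / modeled
        x *= (As.T @ ratio) / (As.T @ np.ones(len(s)))  # MLEM update on subset

# x estimates 50*x_true because of the count scaling; error should be small
print("relative error:", np.linalg.norm(x / 50 - x_true) / np.linalg.norm(x_true))
```

Each subset update reuses only a fraction of the projections, which is what gives OSEM its speedup over plain MLEM at a comparable contrast level.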
Cognitive Jointly Optimal Code-Division Channelization and Routing Over Cooperative Links
2014-04-01
[Front-matter figure list: comparison between code-division channelization and FDM; secondary receiver SINR as a function of the iteration step; transmission percentage as a function of the number of active links under the cases rank(X'') = 1 and > 1 (including a random code assignment scheme); instantaneous output SINR of a primary signal against the primary SINR-QoS threshold SINRthPU.]
Statistical Mechanics of Combinatorial Auctions
NASA Astrophysics Data System (ADS)
Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo
2006-09-01
Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
A modified dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1981-01-01
A revised version of a split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to solution of the three-dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard successive overrelaxation iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition.
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
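The abstract does not spell out its square-root ARE iteration, so the sketch below uses the classical Newton-Kleinman scheme, a standard super-linearly (quadratically) convergent iteration of the same family: each step solves one Lyapunov equation. The matrices are illustrative, and SciPy's direct ARE solver is used only to check the result.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Newton-Kleinman iteration for the continuous algebraic Riccati equation
#   A'X + XA - X B R^{-1} B' X + Q = 0.
# From a stabilizing initial gain, convergence is quadratic. Since this
# illustrative A is already stable, K0 = 0 is stabilizing.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(10):
    Ak = A - B @ K
    # Solve Ak' X + X Ak = -(Q + K' R K) for X
    X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ X)        # next feedback gain

print(np.allclose(X, solve_continuous_are(A, B, Q, R)))  # True
```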
NASA Astrophysics Data System (ADS)
Zhou, Hai-Jun
2016-04-01
Rock-Paper-Scissors (RPS), a game of cyclic dominance, is not merely a popular children's game but also a basic model system for studying decision-making in non-cooperative strategic interactions. Aimed at students of physics with no background in game theory, this paper introduces the concepts of Nash equilibrium and evolutionarily stable strategy, and reviews some recent theoretical and empirical efforts on the non-equilibrium properties of the iterated RPS, including collective cycling, conditional response patterns and microscopic mechanisms that facilitate cooperation. We also introduce several dynamical processes to illustrate the applications of RPS as a simplified model of species competition in ecological systems and price cycling in economic markets.
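As a minimal illustration of the cyclic dynamics the review discusses, the sketch below iterates an exponential replicator update for RPS. The zero-sum payoff matrix and step size are standard textbook choices, not parameters taken from the paper.

```python
import numpy as np

# Discrete-time (exponential) replicator dynamics for Rock-Paper-Scissors.
# Payoffs: win = +1, loss = -1, tie = 0. The mixed Nash equilibrium is
# (1/3, 1/3, 1/3); trajectories cycle around it rather than converging.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

x = np.array([0.5, 0.3, 0.2])          # initial mix of (R, P, S) players
for t in range(200):
    f = A @ x                          # fitness of each pure strategy
    x = x * np.exp(0.1 * f)            # exponential replicator update
    x /= x.sum()                       # renormalize to a probability vector

print(x)                               # still orbiting (1/3, 1/3, 1/3)
```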
Finite-size effects and switching times for Moran process with mutation.
DeVille, Lee; Galiardi, Meghan
2017-04-01
We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
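A minimal sketch of a Moran process with mutation is given below. It uses generic coordination-type payoffs with two stable equilibria rather than the paper's iterated Prisoner's Dilemma payoffs, but it exhibits the same qualitative behavior: long residence near one metastable state with rare, mutation-driven switches.

```python
import numpy as np

# Moran birth-death process with mutation for two strategies A and B.
# Payoffs are illustrative coordination-type values (not the paper's IPD):
# both all-A and all-B are metastable; mutation drives occasional switches.
rng = np.random.default_rng(1)
N, mu = 100, 0.01                         # population size, mutation rate
a, b, c, d = 3.0, 1.0, 1.0, 2.0           # A vs A/B, B vs A/B payoffs

i = N // 2                                # current number of A-players
counts = []
for t in range(200_000):
    fA = (a*(i-1) + b*(N-i)) / (N-1)      # average payoffs, excluding self
    fB = (c*i + d*(N-i-1)) / (N-1)
    pA = i*fA / (i*fA + (N-i)*fB)         # fitness-proportional birth
    birth_is_A = rng.random() < pA
    if rng.random() < mu:                 # mutation flips the offspring type
        birth_is_A = not birth_is_A
    death_is_A = rng.random() < i / N     # uniformly chosen death
    i += birth_is_A - death_is_A
    counts.append(i)

print("mean fraction of A:", np.mean(counts) / N)
```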
NASA Technical Reports Server (NTRS)
Cole, H. A., Jr.
1973-01-01
Random decrement signatures of structures vibrating in a random environment are studied through use of computer-generated and experimental data. Statistical properties obtained indicate that these signatures are stable in form and scale and hence should have wide application in on-line failure detection and damping measurement. On-line procedures are described, and equations for estimating record-length requirements to obtain signatures of a prescribed precision are given.
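The random decrement signature itself is simple to compute: average the response segments that follow each crossing of a trigger level. The sketch below does this for a synthetic, noise-driven damped oscillator; the trigger level and segment length are illustrative choices.

```python
import numpy as np

# Random decrement signature: average the segments that follow each
# up-crossing of a trigger level. For a randomly excited linear structure
# the average converges to a free-decay-like signature whose decay rate
# reflects the damping. The signal below is a toy 5 Hz damped oscillator.
rng = np.random.default_rng(2)
fs, T = 200.0, 600.0                       # sample rate (Hz), duration (s)
t = np.arange(0, T, 1/fs)
noise = rng.standard_normal(t.size)        # broadband random excitation
x = np.zeros(t.size)
w0, zeta, dt, v = 2*np.pi*5, 0.02, 1/fs, 0.0
for k in range(1, t.size):                 # crude explicit Euler integration
    acc = noise[k] - 2*zeta*w0*v - w0**2*x[k-1]
    v += acc*dt
    x[k] = x[k-1] + v*dt

level = x.std()                            # trigger level (one std)
seg_len = int(2.0*fs)                      # 2 s signature window
starts = np.where((x[:-1] < level) & (x[1:] >= level))[0]
starts = starts[starts + seg_len < x.size]
signature = np.mean([x[s:s+seg_len] for s in starts], axis=0)
print(f"{starts.size} triggers averaged; signature peak {signature.max():.3g}")
```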
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint (densely) defined on the space of square-integrable functions over the (lower) half-space with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem.
Burnecki, Krzysztof; Wylomanska, Agnieszka; Chechkin, Aleksei
2015-01-01
In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or of a non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions, and to differentiate between distributions that appear identical under the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable laws, cases which are considered in the literature to be very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.
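A minimal numerical illustration of the convergence-rate idea: the spread of partial sums S_n grows like n^(1/alpha), so an estimated growth exponent separates the Gaussian domain of attraction (exponent 1/2) from heavy-tailed domains (1/alpha > 1/2). This is a sketch of the principle only, not the authors' visual test.

```python
import numpy as np

# Estimate the growth exponent of the partial-sum spread: ~0.5 for the
# Gaussian domain of attraction, ~1/alpha for alpha-stable domains.
# The IQR is used so that a nonzero mean does not bias the estimate.
rng = np.random.default_rng(3)

def sum_spread_exponent(sample_fn, ns=(10, 100, 1000, 10000), reps=500):
    iqr = []
    for n in ns:
        s = sample_fn((reps, n)).sum(axis=1)     # reps independent sums S_n
        q75, q25 = np.percentile(s, [75, 25])
        iqr.append(q75 - q25)
    slope, _ = np.polyfit(np.log(ns), np.log(iqr), 1)
    return slope                                  # rough estimate of 1/alpha

print("Gaussian  :", sum_spread_exponent(rng.standard_normal))   # ~0.5
print("Cauchy    :", sum_spread_exponent(rng.standard_cauchy))   # ~1.0
pareto15 = lambda size: rng.pareto(1.5, size)                     # alpha = 1.5
print("Pareto 1.5:", sum_spread_exponent(pareto15))               # ~0.67
```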
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
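As a starting point for such density tuning, the sketch below generates an (approximately) maximal random packing on the unit square by naive dart throwing. The rejection budget is a cheap stand-in for true maximality, and the paper's relocate/inject/eject tuning operations are not reproduced here.

```python
import numpy as np

# Naive dart throwing toward a maximal random disk packing on the unit
# square: accept a candidate center only if it lies at least r from every
# accepted center; stop after many consecutive rejections as a cheap proxy
# for maximality. The radius and rejection budget are illustrative.
rng = np.random.default_rng(4)
r = 0.05
pts = np.empty((0, 2))
misses = 0
while misses < 5000:                       # rejection budget ~ maximality proxy
    c = rng.random(2)                      # uniform candidate center
    if pts.size and (np.linalg.norm(pts - c, axis=1) < r).any():
        misses += 1                        # conflict: too close to a disk
        continue
    pts = np.vstack([pts, c])              # accept conflict-free candidate
    misses = 0

print(len(pts), "disks placed")
```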
Functional mixed effects spectral analysis
Krafty, Robert T.; Hall, Martica; Guo, Wensheng
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
CRISPR/Cas9-coupled recombineering for metabolic engineering of Corynebacterium glutamicum.
Cho, Jae Sung; Choi, Kyeong Rok; Prabowo, Cindy Pricilia Surya; Shin, Jae Ho; Yang, Dongsoo; Jang, Jaedong; Lee, Sang Yup
2017-07-01
Genome engineering of Corynebacterium glutamicum, an important industrial microorganism for amino acid production, currently relies on random mutagenesis and inefficient double crossover events. Here we report a rapid genome engineering strategy to scarlessly knock out one or more genes in C. glutamicum in a sequential and iterative manner. Recombinase RecT is used to incorporate synthetic single-stranded oligodeoxyribonucleotides into the genome and CRISPR/Cas9 to counter-select negative mutants. We completed the system by engineering the respective plasmids harboring CRISPR/Cas9 and RecT for efficient curing, such that multiple gene targets can be addressed iteratively and the final strains are free of plasmids. To demonstrate the system, seven different mutants were constructed within two weeks to study the combinatorial deletion effects of three different genes on the production of γ-aminobutyric acid, an industrially relevant chemical of much interest. This genome engineering strategy will expedite metabolic engineering of C. glutamicum.
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.
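A minimal version of the adaptive system identification setup described above: LMS and NLMS estimating an unknown FIR response from a random input and a noisy output. Filter length, step sizes, and noise level are illustrative choices.

```python
import numpy as np

# LMS vs. NLMS identification of an unknown FIR system from a random
# input signal whose measured output is contaminated by noise.
rng = np.random.default_rng(5)
h_true = np.array([0.7, -0.3, 0.2, 0.1])   # unknown system impulse response
L, n = len(h_true), 5000
x = rng.standard_normal(n)                 # random (white) input signal
d = np.convolve(x, h_true)[:n] + 0.01*rng.standard_normal(n)  # noisy output

def lms(mu, normalized=False):
    w = np.zeros(L)
    for k in range(L, n):
        u = x[k-L+1:k+1][::-1]             # most recent L input samples
        e = d[k] - w @ u                   # a priori output error
        step = mu / (u @ u + 1e-8) if normalized else mu
        w += step * e * u                  # stochastic-gradient update
    return w

print("LMS  misalignment:", np.linalg.norm(lms(0.01) - h_true))
print("NLMS misalignment:", np.linalg.norm(lms(0.5, normalized=True) - h_true))
```

Normalizing the step by the input energy is what makes NLMS insensitive to the input scale, at the cost of a small regularization constant in the denominator.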
Template-Directed Copolymerization, Random Walks along Disordered Tracks, and Fractals
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2016-12-01
In biology, template-directed copolymerization is the fundamental mechanism responsible for the synthesis of DNA, RNA, and proteins. More than 50 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of information processing in DNA replication, transcription, and translation remain poorly understood. Challenging issues are the facts that DNA or RNA sequences constitute disordered media for the motion of polymerases or ribosomes while errors occur in copying the template. Here, it is shown that these issues can be addressed and sequence heterogeneity effects can be quantitatively understood within a framework revealing universal aspects of information processing at the molecular scale. In steady growth regimes, the local velocities of polymerases or ribosomes along the template are distributed as the continuous or fractal invariant set of a so-called iterated function system, which determines the copying error probabilities. The growth may become sublinear in time with a scaling exponent that can also be deduced from the iterated function system.
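For readers unfamiliar with iterated function systems, the chaos-game sketch below samples the invariant measure of two affine contractions chosen with unequal probabilities, producing a Cantor-like fractal support. The maps and weights are generic illustrations of the IFS notion, not the paper's copolymerization kinetics.

```python
import numpy as np

# Chaos-game sampling of an iterated function system's invariant measure.
# Two affine contractions with disjoint images give a Cantor-like set;
# unequal selection probabilities give it a non-uniform (multifractal)
# measure. All parameters are illustrative.
rng = np.random.default_rng(6)
maps = [lambda v: 0.45*v,                # contraction toward 0
        lambda v: 0.45*v + 0.55]         # contraction toward 1
probs = [0.7, 0.3]                       # unequal selection probabilities

v, samples = 0.5, []
for t in range(100_000):
    f = maps[rng.choice(2, p=probs)]     # pick a map at random
    v = f(v)
    if t > 100:                          # discard the transient
        samples.append(v)

hist, _ = np.histogram(samples, bins=50, range=(0, 1), density=True)
print("occupied bins:", (hist > 0).sum(), "of 50")   # gaps reveal fractality
```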
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
Extended Lagrangian Excited State Molecular Dynamics.
Bjorgaard, J A; Sheppard, D; Tretiak, S; Niklasson, A M N
2018-02-13
An extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born-Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. The XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree-Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
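For a fully populated exponent matrix P with circulant size N, the standard algebraic condition (due to Fossorier) states that a girth-4 cycle exists exactly when P[i1,j1] - P[i1,j2] + P[i2,j2] - P[i2,j1] ≡ 0 (mod N) for some pair of rows and pair of columns. The sketch below implements that check; the paper's type I/type II classification and joint source-relay design are not reproduced.

```python
import numpy as np
from itertools import combinations

# Girth-4 detection for a QC-LDPC exponent matrix P with circulant size N.
# Assumes a fully populated exponent matrix; "-1" empty entries, if used,
# would need masking before applying the condition.
def has_girth4(P, N):
    rows, cols = P.shape
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (P[i1, j1] - P[i1, j2] + P[i2, j2] - P[i2, j1]) % N == 0:
                return True
    return False

P = np.array([[0, 0, 0],
              [0, 1, 2]])
print(has_girth4(P, N=5))       # False: all length-4 cycles are cancelled

P_bad = np.array([[0, 0],
                  [1, 1]])
print(has_girth4(P_bad, N=5))   # True: 0 - 0 + 1 - 1 = 0 (mod 5)
```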
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs); counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid-based Siddon ray tracing and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
McClain, Arianna D; Hekler, Eric B; Gardner, Christopher D
2013-01-01
Previous research from the fields of computer science and engineering highlights the importance of an iterative design process (IDP) to create more creative and effective solutions. This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall-based intervention developed using IDP on college students' eating behavior and values. Participants were 458 students (52.6% female, age = 19.6 ± 1.5 years [M ± SD]). The intervention was developed via an IDP parallel process. A cluster-randomized controlled study compared differences in eating behavior among students in 4 university dining halls (2 intervention, 2 control). The final intervention was a multicomponent, point-of-selection marketing campaign. Students in the intervention dining halls consumed significantly less junk food and high-fat meat and increased their perceived importance of eating a healthful diet relative to the control group. IDP may be valuable for the development of behavior change interventions.
Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo
2015-02-01
In recent years, the technology of laser induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser induced breakdown spectroscopy can detect multiple elements simultaneously, quickly and simply, without any complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested, which makes it very promising in many fields. Separating, fitting and extracting spectral feature lines is the cornerstone of spectral feature recognition and of subsequent element-concentration inversion research. To realize effective separation, fitting and extraction of spectral feature lines, the initial parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlapped with another line (Fe I: 427.176 nm), was separated and extracted using the damped least squares method. Based on the Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, to prevent the iteration from diverging and to ensure fast convergence. The damped least squares method yields better separation, fitting and extraction of spectral feature lines and more accurate intensity values. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the least squares method separately. Calibration curves relating spectral line intensity to chromium concentration were plotted for each method, and their linear correlations were compared. The experimental results showed that the linear correlation obtained by the damped least squares method was better than that obtained by the least squares method; the damped least squares method is therefore stable, reliable and suitable for separating, fitting and extracting spectral feature lines in laser induced breakdown spectroscopy.
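A sketch of the damped least squares idea using SciPy's MINPACK Levenberg-Marquardt driver: two overlapping lines, modeled here as Gaussians centered near the Cr I / Fe I pair discussed above, are separated by a joint fit. The data are synthetic, and the Gaussian profile is an assumption (real LIBS lines may need Lorentzian or Voigt shapes).

```python
import numpy as np
from scipy.optimize import least_squares

# Damped least squares (Levenberg-Marquardt) separation of two overlapped
# spectral lines modeled as Gaussians. Centers echo the Cr I 427.480 nm /
# Fe I 427.176 nm pair; amplitudes, widths, and noise are synthetic.
rng = np.random.default_rng(7)
wl = np.linspace(426.8, 427.9, 300)        # wavelength axis (nm)

def two_gauss(p, x):
    a1, c1, w1, a2, c2, w2 = p
    return (a1*np.exp(-0.5*((x - c1)/w1)**2) +
            a2*np.exp(-0.5*((x - c2)/w2)**2))

p_true = [1.0, 427.480, 0.06, 0.6, 427.176, 0.08]
y = two_gauss(p_true, wl) + 0.02*rng.standard_normal(wl.size)

p0 = [0.8, 427.45, 0.05, 0.5, 427.20, 0.05]   # rough initial guess
fit = least_squares(lambda p: two_gauss(p, wl) - y, p0, method='lm')

# Line intensities follow from the fitted amplitude-width products.
print("fitted areas (a*w*sqrt(2*pi)):",
      fit.x[0]*fit.x[2]*np.sqrt(2*np.pi),
      fit.x[3]*fit.x[5]*np.sqrt(2*np.pi))
```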
The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.
Dias, José M B; Leitao, José M N
2002-01-01
This paper presents an effective algorithm for absolute phase (not simply modulo-2pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2-pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques and an iterative conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
Overview of Recent DIII-D Experimental Results
NASA Astrophysics Data System (ADS)
Fenstermacher, Max; DIII-D Team
2017-10-01
Recent DIII-D experiments contributed to the ITER physics basis and to physics understanding for extrapolation to future devices. A predict-first analysis showed how shape can enhance access to RMP ELM suppression. 3D equilibrium changes from ELM control RMPs were linked to density pumpout. Ion velocity imaging in the SOL showed 3D C2+ flow perturbations near RMP-induced n = 1 islands. Correlation ECE reveals a 40% increase in Te turbulence during QH-mode and 70% during RMP ELM suppression vs. ELMing H-mode. A long-lived predator-prey oscillation replaces edge MHD in recent low-torque QH-mode plasmas. Spatio-temporally resolved runaway electron measurements validate the importance of synchrotron and collisional damping on RE dissipation. A new small-angle slot divertor achieves strong plasma cooling and facilitates detachment access. Fast ion confinement was improved in high q_min scenarios using variable beam energy optimization. First reproducible, stable ITER baseline scenarios were established. Studies have validated a model for edge momentum transport that predicts the pedestal main-ion intrinsic velocity value and direction. Work supported by the US DOE under DE-FC02-04ER54698 and DE-AC52-07NA27344.
Evolution of extortion in Iterated Prisoner's Dilemma games.
Hilbe, Christian; Nowak, Martin A; Sigmund, Karl
2013-04-23
Iterated games are a fundamental component of economic and evolutionary game theory. They describe situations where two players interact repeatedly and have the ability to use conditional strategies that depend on the outcome of previous interactions, thus allowing for reciprocation. Recently, a new class of strategies has been proposed, so-called "zero-determinant" strategies. These strategies enforce a fixed linear relationship between one's own payoff and that of the other player. A subset of those strategies allows "extortioners" to ensure that any increase in one player's own payoff exceeds that of the other player by a fixed percentage. Here, we analyze the evolutionary performance of this new class of strategies. We show that in reasonably large populations, they can act as catalysts for the evolution of cooperation, similar to tit-for-tat, but that they are not the stable outcome of natural selection. In very small populations, however, extortioners hold their ground. Extortion strategies do particularly well in coevolutionary arms races between two distinct populations. Significantly, they benefit the population that evolves at the slower rate, an example of the so-called "Red King" effect. This may affect the evolution of interactions between host species and their endosymbionts.
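The defining property of an extortion strategy can be checked numerically from the Press-Dyson construction: build the memory-one strategy for a chosen extortion factor chi, form the four-state Markov chain against an arbitrary memory-one opponent, and verify s_X - P = chi (s_Y - P) from the stationary distribution. The payoffs, chi, phi, and the opponent below are illustrative choices.

```python
import numpy as np

# Verify the extortion (zero-determinant) relation s_X - P = chi*(s_Y - P)
# for any memory-one opponent. Payoffs (R,S,T,P) = (3,0,5,1); chi = 3;
# phi is chosen so all cooperation probabilities lie in [0, 1].
R, S, T, P = 3.0, 0.0, 5.0, 1.0
chi, phi = 3.0, 1/26
p = np.array([1 - phi*(chi - 1)*(R - P),        # after CC
              1 + phi*((S - P) - chi*(T - P)),  # after CD
              phi*((T - P) - chi*(S - P)),      # after DC
              0.0])                             # after DD
q = np.array([0.8, 0.3, 0.6, 0.2])              # arbitrary memory-one opponent

# States are (CC, CD, DC, DD) from X's perspective; the opponent sees the
# same state with C/D roles swapped, hence the reindexing of q below.
qs = q[[0, 2, 1, 3]]
M = np.array([[p[i]*qs[i], p[i]*(1 - qs[i]),
               (1 - p[i])*qs[i], (1 - p[i])*(1 - qs[i])] for i in range(4)])

w, v = np.linalg.eig(M.T)                       # stationary distribution
stat = np.real(v[:, np.argmax(np.real(w))])
stat /= stat.sum()

sX = stat @ np.array([R, S, T, P])              # X's long-run payoff
sY = stat @ np.array([R, T, S, P])              # Y's long-run payoff
print((sX - P) / (sY - P))                      # ~= chi = 3, for any q
```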
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. The previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
NASA Astrophysics Data System (ADS)
Idesaki, A.; Koizumi, N.; Sugimoto, M.; Morishita, N.; Ohshima, T.; Okuno, K.
2008-03-01
A laminated material composed of glass cloth/polyimide film/epoxy resin will be used as an insulating material for the superconducting coils of the International Thermonuclear Experimental Reactor (ITER). To maintain safe and stable operation of the superconducting coil system, it is indispensable to evaluate the radiation resistance of the material, because it is exposed to severe environments such as a high radiation field and the low temperature of 4 K. In particular, it is important to estimate the amount of gases evolved from the insulating material by irradiation, because these gases affect the purifying system of liquid helium in the superconducting coil system. In this work, the gas evolution from the laminated material by gamma ray irradiation at liquid nitrogen temperature (77 K) was investigated, and the difference in gas evolution behavior due to differences in the composition of the epoxy resin was discussed. It was found that the main gases evolved from the laminated material by the irradiation were hydrogen, carbon monoxide and carbon dioxide, and that the amount of gases evolved from the epoxy resin containing cyanate ester was about 60% less than that from the epoxy resin containing tetraglycidyl-diaminophenylmethane (TGDDM).
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
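A minimal sketch of the BOS loop described above, under simplifying assumptions: the model is 1D and blocky, and it is parameterized by its jumps, m = Cu with C a cumulative-sum operator, so that TV(m) becomes an l1 penalty on u and the proximal step reduces to soft-thresholding. The kernel G below is a random stand-in for the physical forward operator; note the loop contains no matrix inversion, only matrix-vector products, as the abstract emphasizes.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

n, k = 128, 40
rng = np.random.default_rng(0)
G = rng.standard_normal((k, n))            # hypothetical linear kernel
C = np.tril(np.ones((n, n)))               # m = C u, so TV(m) = ||u||_1
A = G @ C
A /= np.linalg.norm(A, 2)                  # scale so the Lipschitz constant is 1

u_true = np.zeros(n); u_true[[20, 55, 90]] = [1.0, -1.5, 0.8]
d = A @ u_true + 0.005 * rng.standard_normal(k)

mu, delta = 0.02, 1.1                      # prox weight; step 1/delta < 1/||A||^2
u, b = np.zeros(n), d.copy()
for it in range(300):
    grad = A.T @ (A @ u - b)               # forward (gradient) step
    u = soft(u - grad/delta, mu/delta)     # backward (proximal) step
    b = b + d - A @ u                      # Bregman update of the data
m = C @ u                                  # recovered blocky model
print("data misfit:", np.linalg.norm(A @ u - d))
```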
Evolution of extortion in Iterated Prisoner’s Dilemma games
Hilbe, Christian; Nowak, Martin A.; Sigmund, Karl
2013-01-01
Iterated games are a fundamental component of economic and evolutionary game theory. They describe situations where two players interact repeatedly and have the ability to use conditional strategies that depend on the outcome of previous interactions, thus allowing for reciprocation. Recently, a new class of strategies has been proposed, so-called “zero-determinant” strategies. These strategies enforce a fixed linear relationship between one’s own payoff and that of the other player. A subset of those strategies allows “extortioners” to ensure that any increase in one player’s own payoff exceeds that of the other player by a fixed percentage. Here, we analyze the evolutionary performance of this new class of strategies. We show that in reasonably large populations, they can act as catalysts for the evolution of cooperation, similar to tit-for-tat, but that they are not the stable outcome of natural selection. In very small populations, however, extortioners hold their ground. Extortion strategies do particularly well in coevolutionary arms races between two distinct populations. Significantly, they benefit the population that evolves at the slower rate, an example of the so-called “Red King” effect. This may affect the evolution of interactions between host species and their endosymbionts. PMID:23572576
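The "fixed linear relationship" can be checked directly for memory-one strategies by computing stationary payoffs of the pair Markov chain. The sketch below uses the standard payoffs (T, R, P, S) = (5, 3, 1, 0) and the Press-Dyson extortion vector known as Extort-2, p = (8/9, 1/2, 1/3, 0), which enforces s_X - P = 2(s_Y - P) against any opponent; the random opponents stand in for, but do not reproduce, the paper's evolutionary dynamics.

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1                     # prisoner's dilemma payoffs

def stationary_payoffs(p, q):
    """Long-run payoffs of memory-one strategies p (player X) and q (player Y).
    Entries give cooperation probability after (CC, CD, DC, DD), seen from X."""
    qs = [q[0], q[2], q[1], q[3]]           # swap CD/DC for Y's perspective
    M = np.array([[p[i]*qs[i], p[i]*(1 - qs[i]),
                   (1 - p[i])*qs[i], (1 - p[i])*(1 - qs[i])] for i in range(4)])
    w, v = np.linalg.eig(M.T)               # stationary vector of the pair chain
    pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()
    return pi @ np.array([R, S, T, P]), pi @ np.array([R, T, S, P])

extort2 = [8/9, 1/2, 1/3, 0]                # zero-determinant extortion strategy
rng = np.random.default_rng(1)
for _ in range(3):
    q = rng.random(4)                       # arbitrary memory-one opponent
    sx, sy = stationary_payoffs(extort2, q)
    print(f"s_X - P = {sx - P:.4f}   2*(s_Y - P) = {2*(sy - P):.4f}")
```

Each printed pair should agree: the extortioner's surplus over P is pinned to twice the opponent's, whatever q is.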
Software for MR image overlay guided needle insertions: the clinical translation process
NASA Astrophysics Data System (ADS)
Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor
2013-03-01
PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully met after a limited number of operating room tests.
NASA Astrophysics Data System (ADS)
Shintani, Masaru; Umeno, Ken
2018-04-01
The power law is present ubiquitously in nature and in our societies. Therefore, it is important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws that the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
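A rough numerical illustration of the claim (not a substitute for the paper's proof): sums of non-identical symmetric power-law variables, normalized by n^(1/alpha) rather than sqrt(n), settle onto a fixed law as n grows, so their quantiles stop moving. The component distributions and scales below are illustrative assumptions.

```python
import numpy as np

alpha, m = 1.5, 10000                        # tail index < 2; number of sample sums
rng = np.random.default_rng(2)

def normalized_sum(n):
    scales = rng.uniform(0.5, 2.0, size=n)   # non-identical components
    u = rng.random((m, n))
    # symmetric Pareto: P(|X| > x) ~ (scale/x)**alpha
    x = rng.choice([-1.0, 1.0], (m, n)) * scales * u**(-1.0/alpha)
    return x.sum(axis=1) / n**(1.0/alpha)    # stable-law normalization

q = [0.05, 0.25, 0.50, 0.75, 0.95]
for n in (25, 100, 400):
    print(n, np.round(np.quantile(normalized_sum(n), q), 3))
```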
Data-Driven Learning of Total and Local Energies in Elemental Boron
NASA Astrophysics Data System (ADS)
Deringer, Volker L.; Pickard, Chris J.; Csányi, Gábor
2018-04-01
The allotropes of boron continue to challenge structural elucidation and solid-state theory. Here we use machine learning combined with random structure searching (RSS) algorithms to systematically construct an interatomic potential for boron. Starting from ensembles of randomized atomic configurations, we use alternating single-point quantum-mechanical energy and force computations, Gaussian approximation potential (GAP) fitting, and GAP-driven RSS to iteratively generate a representation of the element's potential-energy surface. Beyond the total energies of the very different boron allotropes, our model readily provides atom-resolved, local energies and thus deepened insight into the frustrated β -rhombohedral boron structure. Our results open the door for the efficient and automated generation of GAPs, and other machine-learning-based interatomic potentials, and suggest their usefulness as a tool for materials discovery.
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
Interactive learning in 2×2 normal form games by neural network agents
NASA Astrophysics Data System (ADS)
Spiliopoulos, Leonidas
2012-11-01
This paper models the learning process of populations of randomly rematched tabula rasa neural network (NN) agents playing randomly generated 2×2 normal form games of all strategic classes. This approach has greater external validity than the existing models in the literature, each of which is usually applicable to narrow subsets of classes of games (often a single game) and/or to fixed matching protocols. The learning prowess of NNs with hidden layers was impressive as they learned to play unique pure strategy equilibria with near certainty, adhered to principles of dominance and iterated dominance, and exhibited a preference for risk-dominant equilibria. In contrast, perceptron NNs were found to perform significantly worse than hidden layer NN agents and human subjects in experimental studies.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N^3) operations and O(N^2) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
The Shark Random Swim - (Lévy Flight with Memory)
NASA Astrophysics Data System (ADS)
Businger, Silvia
2018-05-01
The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0,2]. Our aim in this work is to study the impact of the heavy tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
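The sketch below simulates one natural formalization of such a memory walk, under the assumption (to be checked against the paper's exact definition) that each new step either repeats a uniformly chosen past step with probability p or is a fresh α-stable draw with probability 1-p, mirroring the elephant-walk construction.

```python
import numpy as np
from scipy import stats

def shark_swim(alpha, p, n, rng):
    """Memory walk with alpha-stable steps; assumed repeat-or-innovate rule."""
    fresh = stats.levy_stable.rvs(alpha, 0, size=n, random_state=rng)
    steps = np.empty(n)
    steps[0] = fresh[0]
    for t in range(1, n):
        if rng.random() < p:
            steps[t] = steps[rng.integers(t)]   # memory: repeat a past step
        else:
            steps[t] = fresh[t]                 # innovation: new stable step
    return np.cumsum(steps)

rng = np.random.default_rng(3)
alpha = 1.5
for p in (0.3, 0.9):     # below/above the critical memory p = 1/alpha ~ 0.67
    finals = [shark_swim(alpha, p, 2000, rng)[-1] for _ in range(200)]
    print(p, np.median(np.abs(finals)))
```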
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
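The commute time the method relies on has a standard closed form through the Moore-Penrose pseudoinverse L+ of the graph Laplacian: C(i,j) = vol(G)(L+_ii + L+_jj - 2 L+_ij). A toy sketch of only this computation (not the paper's recursive bipartitioning and query-selection pipeline):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],          # triangle (0,1,2) plus a tail 2-3-4
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A               # graph Laplacian
Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
vol = A.sum()                           # sum of degrees
d = np.diag(Lp)
C = vol * (d[:, None] + d[None, :] - 2*Lp)   # commute-time matrix
print(np.round(C, 2))
# C[0,1] < C[3,4]: nodes joined by many short paths (the triangle) are
# "closer" in commute time than the bridge pair -- the property the
# constraint-querying method exploits.
```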
Mean first passage time for random walk on dual structure of dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong; Zhou, Shuigeng
2014-12-01
The random walk approach has recently been widely employed to study the relations between the underlying structure and dynamics of complex systems. The mean first-passage time (MFPT) for random walks is a key index for evaluating the transport efficiency in a given system. In this paper we study analytically the MFPT in a dual structure of the dendrimer network, the Husimi cactus, which has a different application background and a different structure (it contains loops) from the dendrimer. By making use of the iterative construction, we explicitly determine both the partial mean first-passage time (PMFPT, the average of MFPTs to a given target) and the global mean first-passage time (GMFPT, the average of MFPTs over all couples of nodes) on the Husimi cactus. The obtained closed-form results show that the PMFPT and GMFPT follow different scalings with the network order, suggesting that the target location has an essential influence on the transport efficiency. Finally, the impact that the loop structure could bring is analyzed and discussed.
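As a generic numerical companion to such closed-form results, the MFPTs of an unbiased walk to a fixed target satisfy the linear system T_i = 1 + sum_j P_ij T_j with T_target = 0, solvable directly on any small graph. A sketch on a toy graph (not a Husimi cactus):

```python
import numpy as np

def mfpt_to_target(A, target):
    """Mean first-passage times to `target` for an unbiased random walk."""
    n = len(A)
    P = A / A.sum(1, keepdims=True)           # transition probabilities
    idx = [i for i in range(n) if i != target]
    T = np.linalg.solve(np.eye(n - 1) - P[np.ix_(idx, idx)], np.ones(n - 1))
    out = np.zeros(n); out[idx] = T
    return out

A = np.array([[0, 1, 0, 1],                   # 4-cycle with one chord
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], float)
T = mfpt_to_target(A, target=0)
print("MFPTs to node 0:", T)
print("PMFPT (average over sources):", T.sum() / (len(A) - 1))
```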
A Numerical Theory for Impedance Eduction in Three-Dimensional Normal Incidence Tubes
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2016-01-01
A method for educing the locally-reacting acoustic impedance of a test sample mounted in a 3-D normal incidence impedance tube is presented and validated. The unique feature of the method is that the excitation frequency (or duct geometry) may be such that high-order duct modes may exist. The method educes the impedance, iteratively, by minimizing an objective function consisting of the difference between the measured and numerically computed acoustic pressure at preselected measurement points in the duct. The method is validated on planar and high-order mode sources with data synthesized from exact mode theory. These data are then subjected to random jitter to simulate the effects of measurement uncertainties on the educed impedance spectrum. The primary conclusions of the study are 1) without random jitter, the educed impedance is in excellent agreement with that of known impedance samples, and 2) random jitter comparable to that found in a typical experiment has minimal impact on the accuracy of the educed impedance.
Radiative transfer theory for active remote sensing of a forested canopy
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1989-01-01
A canopy is modeled as a two-layer medium above a rough interface. The upper layer stands for the forest crown, with the leaves modeled as randomly oriented and distributed disks and needles and the branches modeled as randomly oriented finite dielectric cylinders. The lower layer contains the tree trunks, modeled as randomly positioned vertical cylinders above the rough soil. Radiative-transfer theory is applied to calculate the EM scattering from such a canopy, which is expressed in terms of the scattering-amplitude tensors (SATs). For the leaves, the generalized Rayleigh-Gans approximation is applied, whereas the branch and trunk SATs are obtained by estimating the inner field by the fields inside a similar cylinder of infinite length. The Kirchhoff method is used to calculate the soil SAT. For a plane wave exciting the canopy, the radiative-transfer equations are solved by iteration to the first order in the albedo of the leaves and the branches. Numerical results are illustrated as a function of the incidence angle.
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, the left central and right central points. These three points have different Gaussian distribution parameters, such as mean (μCN, μL, μR) and standard deviation (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant numbers. The smashing process is then repeated with σL and σR changed randomly.
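Since the abstract leaves the update rule schematic, the sketch below is a loose interpretation: nucleon positions are drawn from the compound-nucleus Gaussian, each is assigned to the left or right fragment centre by comparing Gaussian weights, the centres are re-estimated, and the loop stops once the counts (NL, NR) become constant. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 236                                      # illustrative nucleon count
r = rng.normal(0.0, 3.0, N)                  # positions from the CN Gaussian

muL, muR = -2.0, 2.0                         # left/right central points
sigL, sigR = rng.uniform(1.0, 3.0, 2)        # randomized fragment widths

def gauss(x, mu, sig):
    return np.exp(-0.5*((x - mu)/sig)**2) / sig

NL_prev = -1
while True:
    wL, wR = gauss(r, muL, sigL), gauss(r, muR, sigR)
    left = wL > wR                           # overlay the two distributions
    NL = int(left.sum()); NR = N - NL
    if NL == NL_prev:                        # (NL, NR) became constant
        break
    NL_prev = NL
    muL, muR = r[left].mean(), r[~left].mean()   # re-centre the fragments
print("fragment sizes (NL, NR):", NL, NR)
```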
Luis Martínez Fuentes, Jose; Moreno, Ignacio
2018-03-05
A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
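In sketch form, the per-pixel rule reads: draw a uniform random number, compare it against the normalized target amplitude, and display either the signal phase or a diverging-lens phase. The target field, lens focal length, and grid size below are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

n = 512
y, x = np.mgrid[-n//2:n//2, -n//2:n//2].astype(float)

# desired complex field (hypothetical example): Gaussian amplitude, phase ramp
amp = np.exp(-(x**2 + y**2) / (2*80.0**2))
phase = 0.05 * x                              # target phase pattern

f_px = 2000.0                                 # diverging lens "focal length" (px)
lens = -np.pi * (x**2 + y**2) / f_px          # quadratic diverging phase

r = np.random.default_rng(5).random((n, n))
# the amplitude acts as the discrimination threshold for the random choice
hologram = np.where(r < amp / amp.max(), phase, lens)
slm = np.mod(hologram, 2*np.pi)               # phase-only SLM pattern
print(slm.shape, float(slm.min()), float(slm.max()))
```

Bright pixels mostly carry the signal phase; dim pixels mostly send light into the diverging term, which scatters it away from the on-axis reconstruction, with no iteration required.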
Automating Microbial Directed Evolution For Bioengineering Applications
NASA Astrophysics Data System (ADS)
Lee, A.; Demachkie, I. S.; Sardesh, N.; Arismendi, D.; Ouandji, C.; Wang, J.; Blaich, J.; Gentry, D.
2016-12-01
From a microbiology perspective, directed evolution is a technique that uses controlled environmental pressures to select for a desired phenotype. Directed evolution has the distinct advantage over rational design of not needing extensive knowledge of the genome or pathways associated with a microorganism to induce phenotypes. However, there are currently limitations to the applicability of this technique, including being time-consuming, error-prone, and dependent on existing assays that may lack selectivity for the given phenotype. The AADEC (Autonomous Adaptive Directed Evolution Chamber) system is a proof-of-concept instrument to automate and improve the technique such that directed evolution can be used more effectively as a general bioengineering tool. A series of tests using the automated system and comparable by-hand survival assay measurements have been carried out using UV-C radiation and Escherichia coli cultures in order to demonstrate the advantages of the AADEC versus traditional implementations of directed evolution such as random mutagenesis. AADEC uses UV-C exposure as both a source of environmental stress and mutagenesis, so in order to evaluate the UV-C tolerance obtained from the cultures, a manual UV-C exposure survival assay was developed alongside the device to compare the survival fractions at a fixed dosage. This survival assay involves exposing E. coli to UV-C radiation using a custom-designed exposure hood to control the flux and dose. Surviving cells are counted, then transferred to the next iteration, and so on for several iterations to calculate the survival fractions for each exposure iteration. This survival assay primarily serves as a baseline for the AADEC device, allowing quantification of the differences between the AADEC system and the manual approach. The primary comparison data are survival fractions; these are obtained by optical density and plate counts in the manual assay and by optical density growth curve fits pre- and post-exposure in the automated case. These data can then be compiled to calculate trends over the iterations to characterize increasing UV-C resistance of the E. coli strains. The observed trends are statistically indistinguishable through several iterations from both sources.
SU-E-P-49: Evaluation of Image Quality and Radiation Dose of Various Unenhanced Head CT Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Khan, M; Alapati, K
2015-06-15
Purpose: To evaluate the diagnostic value of various unenhanced head CT protocols and predict an acceptable radiation dose level for head CT exams. Methods: Our retrospective analysis included 3 groups, 20 patients per group, who underwent clinical routine unenhanced adult head CT examination. All exams were performed axially with 120 kVp. Three protocols, 380 mAs without iterative reconstruction and automAs, 340 mAs with iterative reconstruction without automAs, and 340 mAs with iterative reconstruction and automAs, were applied to each patient group, respectively. The images were reconstructed with H30, J30 for the brain window and H60, J70 for the bone window. Images acquired with the three protocols were randomized and blindly reviewed by three radiologists. A 5-point scale was used to rate each exam. The percentage of exams scored above 3 and the average scores of each protocol were calculated for each reviewer and tissue type. Results: For protocols without automAs, the average scores of the bone window with iterative reconstruction were higher than those without iterative reconstruction for each reviewer, although the radiation dose was 10 percent lower. 100 percent of exams were scored 3 or higher and the average scores were above 4 for both brain and bone reconstructions. The CTDIvols are 64.4 and 57.8 mGy for 380 and 340 mAs, respectively. With automAs, the radiation dose varied with head size, resulting in a 47.5 mGy average CTDIvol, ranging between 39.5 and 56.5 mGy. 93 and 98 percent of exams were scored greater than 3 for brain and bone windows, respectively. The diagnostic confidence level and image quality of exams with automAs were lower than those without automAs for each reviewer. Conclusion: According to these results, the mAs was reduced to 300 with automAs OFF for the head CT exam. The radiation dose was 20 percent lower than with the original protocol and the CTDIvol was reduced to 51.2 mGy.
Effectiveness of Ivabradine in Treating Stable Angina Pectoris
Ye, Liwen; Ke, Dazhi; Chen, Qingwei; Li, Guiqiong; Deng, Wei; Wu, Zhiqin
2016-01-01
Abstract: Many studies show that ivabradine is effective for stable angina. This meta-analysis was performed to determine the effect of treatment duration and control group type on ivabradine efficacy in stable angina pectoris. Relevant articles in the English language in the PUBMED and EMBASE databases and related websites were identified by using the search terms "ivabradine," "angina," "randomized controlled trials," and "Iva." The final search date was November 2, 2015. Articles were included if they were published randomized controlled trials that related to ivabradine treatment of stable angina pectoris. Patients with stable angina pectoris were included. The patients were classified according to treatment duration (<3 vs ≥3 months) or type of control group (placebo vs beta-receptor blocker). Angina outcomes were heart rate at rest or peak, exercise duration, and time to angina onset. Seven articles were selected, covering 3747 patients: 2100 and 1647 were in the ivabradine and control groups, respectively. The ivabradine group had significantly longer exercise duration when treated for at least 3 months, but not when the treatment time was less than 3 months. Ivabradine significantly improved time to angina onset regardless of treatment duration. Control group type did not influence the (significant) effects on exercise duration or time to angina onset. Compared with beta-blockers and placebo, ivabradine improved exercise duration and time to onset of angina in patients with stable angina. However, its ability to improve exercise duration only became significant after at least 3 months of treatment. PMID:27057864
NASA Astrophysics Data System (ADS)
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on R+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed in priority, to validate the architecture concept very early without the details. - A software prototype is very quickly available. It improves the communication between system and software teams, as it makes it possible to check very early and efficiently the common understanding of the system requirements. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and induces a lot of difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle. The presentation will then conclude by showing an evaluation of the cost and planning reduction based on a pilot application, comparing figures on two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
NASA Astrophysics Data System (ADS)
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
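The identity that makes this work: if Z1 and Z2 are independent fields with the same covariance, then cos(θ)Z1 + sin(θ)Z2 has that covariance for every θ, so the mixing weights live on the unit circle. The toy sketch below replaces the groundwater solver with a trivial stand-in forward model and ignores conditioning, to show the evaluate-then-interpolate loop:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200

def field():                                  # correlated Gaussian field (1D toy)
    return np.convolve(rng.standard_normal(n + 20), np.ones(21)/21, 'valid')

def forward(z):                               # hypothetical stand-in solver
    return np.array([z[30], z[120]])

obs = np.array([0.15, -0.05])
Z1, Z2 = field(), field()

thetas = np.linspace(0, 2*np.pi, 16, endpoint=False)
resp = np.array([forward(np.cos(t)*Z1 + np.sin(t)*Z2) for t in thetas])

# interpolate the *solutions* around the circle (cheap), then search finely
fine = np.linspace(0, 2*np.pi, 721)
interp = np.array([np.interp(fine, thetas, resp[:, j], period=2*np.pi)
                   for j in range(len(obs))]).T
obj = ((interp - obs)**2).sum(axis=1)
t_best = fine[obj.argmin()]
Z = np.cos(t_best)*Z1 + np.sin(t_best)*Z2     # input field for the next iteration
print("best theta:", round(t_best, 3), "misfit:", round(float(obj.min()), 6))
```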
Multiwavelength ytterbium-Brillouin random Rayleigh feedback fiber laser
NASA Astrophysics Data System (ADS)
Wu, Han; Wang, Zinan; Fan, Mengqiu; Li, Jiaqi; Meng, Qingyang; Xu, Dangpeng; Rao, Yunjiang
2018-03-01
In this letter, we experimentally demonstrate the multiwavelength ytterbium-Brillouin random fiber laser for the first time, in the half-open cavity formed by a fiber loop mirror and randomly distributed Rayleigh mirrors. With a cladding-pumped ytterbium-doped fiber and a long TrueWave fiber, the narrow linewidth Brillouin pump can generate multiple Brillouin Stokes lines with hybrid ytterbium-Brillouin gain. Up to six stable channels with a spacing of about 0.06 nm are obtained. This work extends the operation wavelength of the multiwavelength Brillouin random fiber laser to the 1 µm band, and has potential in various applications.
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Liang, Lin-Mei
2012-08-01
Phase randomization is a very important assumption in the BB84 quantum key distribution (QKD) system with a weak coherent source; otherwise, an eavesdropper may spy on the final key. In this Letter, a stable and monitored active phase randomization scheme for one-way and two-way QKD systems is proposed and demonstrated in experiments. Furthermore, our scheme gives Alice an easy way to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part of future QKD systems due to its security significance and feasibility.
NASA Astrophysics Data System (ADS)
Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.
2017-05-01
A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications where a stable bit is needed to store information, in this work, the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily change the MTJ state into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by integrating a 64×64 MTJ array into CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very low energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of the voltage-controlled MTJ switching.
Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?
Adkison, Milo D.
1995-01-01
Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.
Resolving the iterated prisoner's dilemma: theory and reality.
Raihani, N J; Bshary, R
2011-08-01
Pairs of unrelated individuals face a prisoner's dilemma if cooperation is the best mutual outcome, but each player does best to defect regardless of his partner's behaviour. Although mutual defection is the only evolutionarily stable strategy in one-shot games, cooperative solutions based on reciprocity can emerge in iterated games. Among the most prominent theoretical solutions are the so-called bookkeeping strategies, such as tit-for-tat, where individuals copy their partner's behaviour in the previous round. However, the lack of empirical data conforming to predicted strategies has prompted the suggestion that the iterated prisoner's dilemma (IPD) is neither a useful nor realistic basis for investigating cooperation. Here, we discuss several recent studies where authors have used the IPD framework to interpret their data. We evaluate the validity of their approach and highlight the diversity of proposed solutions. Strategies based on precise accounting are relatively uncommon, perhaps because the full set of assumptions of the IPD model are rarely satisfied. Instead, animals use a diverse array of strategies that apparently promote cooperation, despite the temptation to cheat. These include both positive and negative reciprocity, as well as long-term mutual investments based on 'friendships'. Although there are various gaps in these studies that remain to be filled, we argue that in most cases, individuals could theoretically benefit from cheating and that cooperation cannot therefore be explained with the concept of positive pseudo-reciprocity. We suggest that by incorporating empirical data into the theoretical framework, we may gain fundamental new insights into the evolution of mutual reciprocal investment in nature. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.
NASA Astrophysics Data System (ADS)
Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team
2013-07-01
The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.
NASA Astrophysics Data System (ADS)
MacArt, Jonathan F.; Mueller, Michael E.
2016-12-01
Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
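For orientation, here is a minimal Strang-splitting sketch on a scalar reaction-diffusion toy problem u_t = D u_xx + u(1 - u): a half reaction step, a full diffusion step, and another half reaction step per time step, which is formally second-order. Both sub-steps are solved exactly here; the paper's transport and chemistry operators are, of course, far stiffer and vector-valued.

```python
import numpy as np

n, D, dt, steps = 128, 1e-3, 1e-3, 500
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = 0.5 + 0.4*np.sin(2*np.pi*x)

k2 = -(2*np.pi*np.fft.fftfreq(n, d=1.0/n))**2  # spectral Laplacian eigenvalues

def diffuse(u, dt):
    """Exact periodic diffusion step in Fourier space."""
    return np.real(np.fft.ifft(np.exp(D*k2*dt) * np.fft.fft(u)))

def react(u, dt):
    """Exact solution of the logistic reaction u' = u(1 - u) over dt."""
    e = np.exp(dt)
    return u*e / (1.0 - u + u*e)

for _ in range(steps):
    u = react(u, dt/2)      # half reaction
    u = diffuse(u, dt)      # full diffusion
    u = react(u, dt/2)      # half reaction (Strang symmetrization)
print(float(u.min()), float(u.max()))
```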
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control the maximum change in potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard
2017-01-01
Background: Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose: To evaluate a new iterative reconstruction (IR) algorithm for dose reduction of in vitro coronary artery calcium scoring at different tube currents. Material and Methods: An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. The reference tube current was 80 mAs as standard and was stepwise reduced to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results: In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantoms. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion: New iterative reconstruction kernels may allow for reduction in tube current for established Agatston scoring protocols and consequently for substantial reduction in radiation exposure. PMID:28607763
Khan, A K; Hussain, A Z M I
2012-08-01
The curriculum represents the expression of educational ideas in practice. Ophthalmic education is the cornerstone of improving eye care globally. A curriculum needs continuous modification, varying across geographic locations. Though 90% of common conditions are either preventable or curable, emphasis on these common conditions is inadequate. This is a stepwise descriptive study aiming to develop a community based ophthalmology curriculum for the undergraduate medical course in Bangladesh, conducted during March 2007 to February 2008 at the UniSA School of Public Health and Life Sciences, University of South Asia, Banani, Dhaka. The Delphi technique, a modified qualitative method, was used to accumulate data and reach a consensus opinion for developing the curriculum. The study approach included two iterative rounds and finally a workshop. The iteration of round-I was "What are the eye diseases with overall knowledge of their management one MBBS physician should acquire", followed by a list of eye diseases and topics for expert opinion. The responses were collated. The iteration of round-II was "How much a MBBS student should have percentage of knowledge, attitude and skills on each topic while being taught". The responses were collated and presented to a panel of expert ophthalmologists for discussion and validation. In round-I of the Delphi, 400 (62%) of a total of 641 ophthalmologists were randomly selected, divided into categories (62% in each) of Professor-22, Associate Professor-12, Assistant Professor-26, Consultant-27, ophthalmologists working in NGOs-56, and ophthalmologists in the private sector-257. Sixty (15%) responded with an opinion. In round-II, 200 (31%), including the 60 of round-I, were selected randomly but proportionately as before. Forty-five (22.5%) responded with an opinion, and the results were collated. The results and opinions of the respondents were presented at a workshop attended by 24 (80%) of 30 invited expert ophthalmic specialists for discussion, criticism, opinion, addition, modification and finally validation. On the basis of the opinions of the respondents, a review of the literature, analysis of the ocular disease pattern in Bangladesh, and analysis of the present ophthalmology curriculum, a community and need based ophthalmology curriculum for the undergraduate medical course in Bangladesh was developed. This research would help in developing community and need based ophthalmology curricula for undergraduate medical courses in Bangladesh.
NASA Technical Reports Server (NTRS)
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least-squares nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least-squares methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
igun - A program for the simulation of positive ion extraction including magnetic fields
NASA Astrophysics Data System (ADS)
Becker, R.; Herrmannsfeldt, W. B.
1992-04-01
igun is a program for the simulation of positive ion extraction from plasmas. It is based on the well known program egun for the calculation of electron and ion trajectories in electron guns and lenses. The mathematical treatment of the plasma sheath is based on a simple analytical model, which provides a numerically stable calculation of the sheath potentials. In contrast to other ion extraction programs, igun is able to determine the extracted ion current in succeeding cycles of iteration by itself. However, it is also possible to set values of current, plasma density, or ion current density. Either axisymmetric or rectangular coordinates can be used, including axisymmetric or transverse magnetic fields.
NASA Technical Reports Server (NTRS)
Gomberg, R. I.; Buglia, J. J.
1979-01-01
An iterative technique which recovers density profiles in a nonhomogeneous absorbing atmosphere is derived. The technique is based on the concept of factoring a function of the density profile into the product of a known term and a term which is not known, but whose power series expansion can be found. This series converges rapidly under a wide range of conditions. A demonstration example of simulated data from a high resolution infrared heterodyne instrument is inverted. For the examples studied, the technique is shown to be capable of extracting features of ozone profiles in the troposphere and to be particularly stable.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Gao, Jian-Wei; Gao, Xue-Min; Zou, Ting; Zhao, Tian-Meng; Wang, Dong-Hua; Wu, Zong-Gui; Ren, Chang-Jie; Wang, Xing; Geng, Nai-Zhi; Zhao, Ming-Jun; Liang, Qiu-Ming; Feng, Xing; Yang, Bai-Song; Shi, Jun-Ling; Hua, Qi
2018-03-01
To evaluate the effectiveness and safety of Xinling Wan in patients with stable angina pectoris, a randomized, double-blinded, placebo parallel-controlled, multicenter clinical trial was conducted. A total of 232 subjects were enrolled and randomly divided into an experiment group and a placebo group. The experiment group was treated with Xinling Wan (two pills each time, three times daily) for 4 weeks, and the placebo group was treated with placebo. The effectiveness evaluation showed that Xinling Wan could significantly increase the total duration of treadmill exercise among patients with stable angina pectoris. FAS analysis showed that the change in total exercise duration was (72.11±139.32) s in the experiment group versus (31.25±108.32) s in the placebo group. Xinling Wan remarkably increased the total effective rate of the angina pectoris symptom score: the total effective rate was 78.95% in the experiment group and 42.61% in the placebo group. The reduction in nitroglycerin dose was (2.45±2.41) tablets in the experiment group and (0.50±2.24) tablets in the placebo group on the basis of FAS analysis. The decrease in symptom integral was (4.68±3.49) in the experiment group and (3.19±3.31) in the placebo group based on FAS analysis. Besides, Xinling Wan decreased the weekly attack frequency and the duration of angina pectoris. PPS analysis results were similar to those of the FAS analysis. In conclusion, Xinling Wan has an obvious therapeutic effect in treating stable angina pectoris, with good safety and a low incidence of adverse events and adverse reactions in the experiment group. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Zausner, Tobi
Chaos theory may provide models for creativity and for the personality of the artist. A collection of speculative hypotheses examines the connection between art and such fundamentals of non-linear dynamics as iteration, dissipative processes, open systems, entropy, sensitivity to stimuli, autocatalysis, subsystems, bifurcations, randomness, unpredictability, irreversibility, increasing levels of organization, far-from-equilibrium conditions, strange attractors, period doubling, intermittency and self-similar fractal organization. Non-linear dynamics may also explain why certain individuals suffer mental disorders while others remain intact during a lifetime of sustained creative output.
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
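A minimal numpy sketch of the unconditional core of the Fourier integral method follows: draw white noise, impose the target amplitude spectrum, and invert. The iterative phase-identification step that conditions the field to local data, which is specsim's actual contribution, is omitted; the function name and normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def spectral_field(spectrum):
    """Unconditional spectral simulation on a regular grid: the FFT of white
    noise supplies random phases with a flat spectrum; multiplying by the
    square root of the target power spectrum imposes the desired covariance."""
    noise = rng.standard_normal(spectrum.shape)
    phases = np.fft.fftn(noise)                       # random phases
    field = np.fft.ifftn(np.sqrt(spectrum) * phases).real
    return (field - field.mean()) / field.std()       # standardize
```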
Greedy Gossip With Eavesdropping
NASA Astrophysics Data System (ADS)
Ustebay, Deniz; Oreshkin, Boris N.; Coates, Mark J.; Rabbat, Michael G.
2010-07-01
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
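A minimal sketch of the GGE update described above, modeling eavesdropping as direct access to neighbors' current values (the wake-up model and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def gge(x, neighbors, n_iter=2000):
    """Greedy gossip with eavesdropping (sketch): the node that wakes up
    gossips with the neighbor whose value differs most from its own, and
    both replace their values by the pairwise average. neighbors[i] lists
    node i's one-hop neighbors."""
    x = x.astype(float).copy()
    for _ in range(n_iter):
        i = rng.integers(len(x))                             # active node
        j = max(neighbors[i], key=lambda k: abs(x[k] - x[i]))  # greedy choice
        x[i] = x[j] = 0.5 * (x[i] + x[j])                    # pairwise average
    return x
```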
2.5D transient electromagnetic inversion with OCCAM method
NASA Astrophysics Data System (ADS)
Li, R.; Hu, X.
2016-12-01
In the application of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, using a finite-difference time-domain (FDTD) forward method, is mainly implemented with the Nonlinear Conjugate Gradient (NLCG) method. However, the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the iteration may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion has proven to be a more stable and reliable method for imaging the subsurface 2.5D electrical conductivity. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we use the OCCAM inversion scheme, with an appropriately constructed objective error functional, to image the 2.5D structure; a data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. Imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with an FDTD forward solver, and the inversion iterations show strong convergence within a few steps. Summarizing the imaging process, we draw the following conclusions. First, 2.5D imaging in an FDTD system with OCCAM inversion can recover the desired resistivity structure in a homogeneous half-space. Second, the imaging results do not generally over-depend on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model; it is therefore better to set the initial model based on other geologic information when available. When the background resistivity fits the true model well, imaging the anomalous body requires only a few iteration steps. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
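For orientation, a single generic Occam-style iteration might look like the following numpy sketch. This is textbook Occam with a first-difference roughness operator and a scan over the Lagrange multiplier, not the authors' DASOCC implementation; all names are assumptions.

```python
import numpy as np

def occam_step(m, forward, jacobian, d_obs, R, target_misfit, mus):
    """One Occam iteration: linearize around m, then among the regularized
    solutions m(mu) keep the smoothest one (largest mu) whose nonlinear
    misfit does not exceed the target. R is a roughness operator."""
    J = jacobian(m)                          # sensitivity matrix
    d_hat = d_obs - forward(m) + J @ m       # linearized data vector
    best = None
    for mu in mus:                           # scan the Lagrange multiplier
        m_mu = np.linalg.solve(mu * R.T @ R + J.T @ J, J.T @ d_hat)
        misfit = np.linalg.norm(d_obs - forward(m_mu))
        if misfit <= target_misfit and (best is None or mu > best[0]):
            best = (mu, m_mu)                # smoothest acceptable model
    return best[1] if best else m_mu         # fall back to last trial
```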
NASA Astrophysics Data System (ADS)
Poli, Francesca
2012-10-01
Steady state scenarios envisaged for ITER aim at optimizing the bootstrap current, while maintaining sufficient confinement and stability to provide the necessary fusion yield. Non-inductive scenarios will need to operate with Internal Transport Barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. However, the large pressure gradients associated with ITBs in regions of weak or negative magnetic shear can be conducive to ideal MHD instabilities in a wide range of βN, reducing the no-wall limit. Scenarios are established as relaxed flattop states with time-dependent transport simulations with TSC [1]. Fully non-inductive configurations with current in the range of 7-10 MA and various heating mixes (NB, EC, IC and LH) have been studied against variations of the pressure profile peaking and of the Greenwald fraction. It is found that stable equilibria have qmin> 2 and moderate ITBs at 2/3 of the minor radius [2]. The ExB flow shear from toroidal plasma rotation is expected to be low in ITER, with a major role in the ITB dynamics being played by magnetic geometry. Combinations of H&CD sources that maintain reverse or weak magnetic shear profiles throughout the discharge and ρ(qmin)>=0.5 are the focus of this work. The ITER EC upper launcher, designed for NTM control, can provide enough current drive off-axis to sustain moderate ITBs at mid-radius and maintain a non-inductive current of 8-9MA and H98>=1.5 with the day one heating mix. LH heating and current drive is effective in modifying the current profile off-axis, facilitating the formation of stronger ITBs in the rampup phase, their sustainment at larger radii and larger bootstrap fraction. The implications for steady state operation and fusion performance are discussed. [1] Jardin S.C. et al, J. Comput. Phys. 66 (1986) 481. [2] Poli F.M. et al, Nucl. Fusion 52 (2012) 063027.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable convergence of the iteration. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.
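A schematic of the constraint-adding loop follows, assuming cvxpy for the inner quadratic programs and plain dose sorting in place of the paper's geometric-distance sorting. All names, the violator-batch size, and the stopping rule are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def fmo_with_dvc(A_ptv, d_ptv, A_oar, d_max, frac_allowed, n_rounds=20):
    """Sketch: minimize ||A_ptv x - d_ptv||^2 over x >= 0, gradually adding
    hard dose caps for OAR voxels that violate a dose-volume constraint of
    the form 'at most frac_allowed of OAR voxels may exceed d_max'."""
    n = A_ptv.shape[1]
    x = cp.Variable(n, nonneg=True)
    capped = set()                       # OAR voxels with an active dose cap
    for _ in range(n_rounds):
        cons = [A_oar[list(capped)] @ x <= d_max] if capped else []
        cp.Problem(cp.Minimize(cp.sum_squares(A_ptv @ x - d_ptv)), cons).solve()
        dose = A_oar @ x.value
        violators = np.where(dose > d_max)[0]
        if len(violators) <= frac_allowed * A_oar.shape[0]:
            return x.value               # dose-volume constraint satisfied
        # cap the worst offenders first (plain dose sorting stands in for
        # the paper's geometric-distance sorting)
        worst = violators[np.argsort(dose[violators])[::-1]]
        for v in worst[: max(1, len(worst) // 10)]:
            capped.add(int(v))
    return x.value
```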
Can An Evolutionary Process Create English Text?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
Critics of the conventional theory of biological evolution have asserted that while natural processes might result in some limited diversity, nothing fundamentally new can arise from 'random' evolution. In response, biologists such as Richard Dawkins have demonstrated that a computer program can generate a specific short phrase via evolution-like iterations starting with random gibberish. While such demonstrations are intriguing, they are flawed in that they have a fixed, pre-specified future target, whereas in real biological evolution there is no fixed future target, but only a complicated 'fitness landscape'. In this study, a significantly more sophisticated evolutionary scheme is employed to produce text segments reminiscent of a Charles Dickens novel. The aggregate size of these segments is larger than the computer program and the input Dickens text, even when comparing compressed data (as a measure of information content).
Tan, T J; Lau, Kenneth K; Jackson, Dana; Ardley, Nicholas; Borasu, Adina
2017-04-01
The purpose of this study was to assess the efficacy of model-based iterative reconstruction (MBIR), statistical iterative reconstruction (SIR), and filtered back projection (FBP) image reconstruction algorithms in the delineation of ureters and overall image quality on non-enhanced computed tomography of the renal tracts (NECT-KUB). This was a prospective study of 40 adult patients who underwent NECT-KUB for investigation of ureteric colic. Images were reconstructed using FBP, SIR, and MBIR techniques and individually and randomly assessed by two blinded radiologists. Parameters measured were overall image quality, presence of ureteric calculus, presence of hydronephrosis or hydroureters, image quality of each ureteric segment, total length of ureters unable to be visualized, attenuation values of image noise, and retroperitoneal fat content for each patient. There were no diagnostic discrepancies between image reconstruction modalities for urolithiasis. Overall image quality and the image quality of each ureteric segment were superior with MBIR (67.5% rated as 'Good to Excellent' vs. 25% for SIR and 2.5% for FBP). The lengths of non-visualized ureteric segments were shortest with MBIR (55.0% measured less than 5 cm vs. 33.8% for SIR and 10% for FBP). MBIR reduced overall image noise by up to 49.36% over SIR and 71.02% over FBP. The MBIR technique thus improves overall image quality and visualization of ureters over FBP and SIR.
A Parallel Fast Sweeping Method for the Eikonal Equation
NASA Astrophysics Data System (ADS)
Baker, B.
2017-12-01
Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits of random sampling methods are only realized, however, by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver uses the iterative fast sweeping method, because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al. (2014) is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver addresses scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
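For reference, a serial first-order fast sweeping solver in two dimensions looks like the sketch below. The paper's solver is 3D, high-order, and parallelized via the DAG described above; this minimal version only shows the alternating sweep orderings and the Godunov update.

```python
import numpy as np

def fast_sweep_eikonal(s, h, src, n_sweeps=8):
    """First-order fast sweeping for |grad T| = s on a 2D grid with spacing h.
    src is the (i, j) source index. 2D has four sweep orderings (3D has the
    eight mentioned above); each is one Gauss-Seidel-style pass."""
    ny, nx = s.shape
    BIG = 1e10
    T = np.full((ny, nx), BIG)
    T[src] = 0.0
    for _ in range(n_sweeps):
        for di, dj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            for i in range(ny)[::di]:
                for j in range(nx)[::dj]:
                    a = min(T[i-1, j] if i > 0 else BIG,
                            T[i+1, j] if i < ny-1 else BIG)
                    b = min(T[i, j-1] if j > 0 else BIG,
                            T[i, j+1] if j < nx-1 else BIG)
                    f = s[i, j] * h
                    if abs(a - b) >= f:      # causal update from one side
                        t_new = min(a, b) + f
                    else:                    # two-sided Godunov update
                        t_new = 0.5 * (a + b + np.sqrt(2*f*f - (a - b)**2))
                    T[i, j] = min(T[i, j], t_new)
    return T
```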
Diversity among elephant grass genotypes using Bayesian multi-trait model.
Rossi, D A; Daher, R F; Barbé, T C; Lima, R S N; Costa, A F; Ribeiro, L P; Teodoro, P E; Bhering, L L
2017-09-27
Elephant grass is a perennial tropical grass with great potential for energy generation from biomass. The objective of this study was to estimate the genetic diversity among elephant grass accessions based on morpho-agronomic and biomass quality traits and to identify promising genotypes for obtaining hybrids with high energetic biomass production capacity. The experiment was installed at the experimental area of the State Agricultural College Antônio Sarlo, in Campos dos Goytacazes. Fifty-two elephant grass genotypes were evaluated in a randomized block design with two replicates. Components of variance and the genotypic means were obtained using a Bayesian multi-trait model. We ran 350,000 iterations of the Gibbs sampler for each parameter adopted, with a warm-up period (burn-in) of 50,000 iterations. For obtaining an uncorrelated sample, we considered five iterations (thinning) as the spacing between sampled points, which resulted in a final sample size of 60,000. Subsequently, the Mahalanobis distance between each pair of genotypes was estimated. Estimates of genotypic variance indicated a favorable condition for gains in all traits. Elephant grass accessions presented greater variability for biomass quality traits, for which three groups were formed, while for the agronomic traits, two groups were formed. Crosses between Mercker Pinda México x Mercker 86-México, Mercker Pinda México x Turrialba, and Mercker 86-México x Taiwan A-25 can be carried out for obtaining elephant grass hybrids for energy purposes.
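For reference, the retained posterior sample size follows directly from the chain settings quoted above:

\[
N_{\text{samples}} = \frac{350{,}000 - 50{,}000}{5} = 60{,}000 .
\]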
A family of small-world network models built by complete graph and iteration-function
NASA Astrophysics Data System (ADS)
Ma, Fei; Yao, Bing
2018-02-01
Small-world networks are popular in real-life complex systems. In the past few decades, researchers have presented many small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical parameter for uncovering the functional organization of a network. Building and studying models with both community structure and the small-world character is therefore a worthwhile task. Hence, in this article, we build a family of sparse networks forming a network space N(t), different from previous deterministic models even though it is constructed in the same way, by iterative generation. Because connections are made randomly at each time step, the resulting members of N(t) lack the strict self-similarity widely shared by previous models. This shifts the focus from discussing one particular model to investigating a group of various models spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess similar characteristics: (a) sparsity, (b) an exponential-scale degree distribution P(k) ∼ α^(-k), and (c) the small-world property. We also stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connection pattern among members has little effect on the APL. At the end of this article, the number of spanning trees on a representative member NB(t) of N(t), a topological parameter correlated with the reliability, synchronization capability, and diffusion properties of networks, is studied in detail, and an exact analytical solution for its spanning-tree entropy is obtained.
Selection of stable scFv antibodies by phage display.
Brockmann, Eeva-Christine
2012-01-01
ScFv fragments are popular recombinant antibody formats but often suffer from limited stability. Phage display is a powerful tool in antibody engineering and applicable also for stability selection. ScFv variants with improved stability can be selected from large randomly mutated phage displayed libraries with a specific antigen after the unstable variants have been inactivated by heat or GdmCl. Irreversible scFv denaturation, which is a prerequisite for efficient selection, is achieved by combining denaturation with reduction of the intradomain disulfide bonds. Repeated selection cycles of increasing stringency result in enrichment of stabilized scFv fragments. Procedures for constructing a randomly mutated scFv library by error-prone PCR and phage display selection for enrichment of stable scFv antibodies from the library are described here.
Coarse-grained modeling of crystal growth and polymorphism of a model pharmaceutical molecule.
Mandal, Taraknath; Marson, Ryan L; Larson, Ronald G
2016-10-04
We describe a systematic coarse-graining method to study crystallization and predict possible polymorphs of small organic molecules. In this method, a coarse-grained (CG) force field is obtained by inverse-Boltzmann iteration from the radial distribution function of atomistic simulations of the known crystal. With the force field obtained by this method, we show that CG simulations of the drug phenytoin predict growth of a crystalline slab from a melt of phenytoin, allowing determination of the fastest-growing surface, as well as giving the correct lattice parameters and crystal morphology. By applying meta-dynamics to the coarse-grained model, a new crystalline form of phenytoin (monoclinic, space group P2_1) was predicted, which is different from the experimentally known crystal structure (orthorhombic, space group Pna2_1). Atomistic simulations and quantum calculations then showed the polymorph to be meta-stable at ambient temperature and pressure, and thermodynamically more stable than the conventional orthorhombic crystal at high pressure. The results suggest an efficient route to studying crystal growth of small organic molecules that could also be useful for identifying possible polymorphs.
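The inverse-Boltzmann iteration itself is compact. Below is a sketch of one update for a tabulated pair potential; the scaling factor alpha and the initial guess U_0(r) = -kT ln g*(r) are standard in the IBI literature, not details quoted from this paper.

```python
import numpy as np

def ibi_update(U, g_sim, g_target, kT, alpha=1.0):
    """One inverse-Boltzmann iteration on a radial grid:
    U_{n+1}(r) = U_n(r) + alpha * kT * ln(g_n(r) / g*(r)),
    where g_sim is the RDF measured from a CG run with potential U and
    g_target is the atomistic reference RDF. Small offsets guard log(0)."""
    eps = 1e-12
    return U + alpha * kT * np.log((g_sim + eps) / (g_target + eps))

# Typical starting point: the potential of mean force of the target RDF,
# U_0(r) = -kT * np.log(g_target + eps).
```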
Robust controller design for flexible structures using normalized coprime factor plant descriptions
NASA Technical Reports Server (NTRS)
Armstrong, Ernest S.
1993-01-01
Stabilization is a fundamental requirement in the design of feedback compensators for flexible structures. The search for the largest neighborhood around a given design plant for which a single controller produces closed-loop stability can be formulated as an H∞ control problem. The use of normalized coprime factor plant descriptions, in which the plant perturbations are defined as additive modifications to the coprime factors, leads to a closed-form expression for the maximum neighborhood boundary allowing optimal and suboptimal H∞ compensators to be computed directly without the usual gamma iteration. A summary of the theory on robust stabilization using normalized coprime factor plant descriptions is presented, and the application of the theory to the computation of robustly stable compensators for the phase version of the Control-Structures Interaction (CSI) Evolutionary Model is described. Results from the application indicate that the suboptimal version of the theory has the potential of providing the bases for the computation of low-authority compensators that are robustly stable to expected variations in design model parameters and additive unmodeled dynamics.
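For context, the closed-form expression alluded to above is standard in the normalized-coprime-factor literature (Glover and McFarlane), quoted here from the general theory rather than from this report. Writing the plant's normalized left coprime factorization as $P = \tilde M^{-1}\tilde N$, the maximal robust stability margin is

\[
\varepsilon_{\max} = \frac{1}{\gamma_{\min}} = \sqrt{1 - \left\| \begin{bmatrix} \tilde N & \tilde M \end{bmatrix} \right\|_H^2},
\]

where $\|\cdot\|_H$ denotes the Hankel norm; suboptimal compensators for any $\varepsilon < \varepsilon_{\max}$ can then be computed directly, without gamma iteration.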
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungbae; Lee, Jinwoo; Na, Hyon Bin
2005-12-01
Enzymes are versatile nanoscale biocatalysts, and find increasing applications in many areas, including organic synthesis[1-3] and bioremediation.[4-5] However, the application of enzymes is often hampered by the short catalytic lifetime of enzymes and by the difficulty in recovery and recycling. To solve these problems, there have been many efforts to develop effective enzyme immobilization techniques. Recent advances in nanotechnology provide more diverse materials and approaches for enzyme immobilization. For example, mesoporous materials offer potential advantages as a host for enzymes due to their well-controlled porosity and large surface area for the immobilization of enzymes.[6,7] On the other hand, it has been demonstrated that enzymes attached to magnetic iron oxide nanoparticles can be easily recovered using a magnet and recycled for iterative uses.[8] In this paper, we report the development of a magnetically-separable and highly-stable enzyme system by the combined use of two different kinds of nanostructured materials: magnetic nanoparticles and mesoporous silica.
Bertrand, Michel E; Ferrari, Roberto; Remme, Willem J; Simoons, Maarten L; Fox, Kim M
2015-12-01
β-Blockers relieve angina/ischemia in stable coronary artery disease (CAD), and angiotensin-converting enzyme inhibitors prevent CAD outcomes. In EUROPA, the angiotensin-converting enzyme inhibitor perindopril reduced cardiovascular outcomes in low-risk stable CAD patients over 4.2 years. This post hoc analysis examined whether the addition of perindopril to β-blocker in EUROPA had additional benefits on outcomes compared with standard therapy including β-blocker. EUROPA was a multicenter, double-blind, placebo-controlled, randomized trial in patients with documented stable CAD. Randomized EUROPA patients who received β-blocker at baseline were identified, and the effect on cardiovascular outcomes of adding perindopril or placebo was analyzed. Endpoints were the same as those in EUROPA. At baseline, 62% (n = 7534 [3789 on perindopril and 3745 on placebo]) received β-blocker. Treatment with perindopril/β-blocker reduced the relative risk of the primary end point (cardiovascular death, nonfatal myocardial infarction, and resuscitated cardiac arrest) by 24% compared with placebo/β-blocker (HR, 0.76; 95% CI, 0.64-0.91; P = .002). Addition of perindopril also reduced fatal or nonfatal myocardial infarction by 28% (HR, 0.72; 95% CI, 0.59-0.88; P = .001) and hospitalization for heart failure by 45% (HR, 0.55; 95% CI, 0.33-0.93; P = .025). Serious adverse drug reactions were rare in both groups, and cardiovascular death and hospitalizations occurred less often with perindopril/β-blocker. The addition of perindopril to β-blocker in stable CAD patients was safe and resulted in reductions in cardiovascular outcomes and mortality compared with standard therapy including β-blocker. Copyright © 2015 Elsevier Inc. All rights reserved.
Expected distributions of root-mean-square positional deviations in proteins.
Pitera, Jed W
2014-06-19
The atom positional root-mean-square deviation (RMSD) is a standard tool for comparing the similarity of two molecular structures. It is used to characterize the quality of biomolecular simulations, to cluster conformations, and as a reaction coordinate for conformational changes. This work presents an approximate analytic form for the expected distribution of RMSD values for a protein or polymer fluctuating about a stable native structure. The mean and maximum of the expected distribution are independent of chain length for long chains and linearly proportional to the average atom positional root-mean-square fluctuations (RMSF). To approximate the RMSD distribution for random-coil or unfolded ensembles, numerical distributions of RMSD were generated for ensembles of self-avoiding and non-self-avoiding random walks. In both cases, for all reference structures tested for chains more than three monomers long, the distributions have a maximum distant from the origin with a power-law dependence on chain length. The purely entropic nature of this result implies that care must be taken when interpreting stable high-RMSD regions of the free-energy landscape as "intermediates" or well-defined stable states.
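Two of the quantities referenced above, in numpy form. This is a bare sketch: in practice each conformation would first be superposed onto the reference (e.g. by the Kabsch algorithm), a step omitted here.

```python
import numpy as np

def rmsd(a, b):
    """Plain positional RMSD between two conformations (N x 3 arrays),
    without optimal superposition."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

def mean_rmsf(ensemble):
    """Atom-averaged RMSF of an ensemble (M x N x 3). Per the text, the
    mean of the RMSD distribution about the average structure scales
    linearly with this quantity for long chains."""
    mean_structure = ensemble.mean(axis=0)
    return np.sqrt(((ensemble - mean_structure) ** 2).sum(axis=2).mean())
```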
Scattering from very rough layers under the geometric optics approximation: further investigation.
Pinel, Nicolas; Bourlier, Christophe
2008-06-01
Scattering from very rough homogeneous layers is studied in the high-frequency limit (under the geometric optics approximation) by taking the shadowing effect into account. To do so, the iterated Kirchhoff approximation, recently developed by Pinel et al. [Waves Random Complex Media 17, 283 (2007)] and reduced to the geometric optics approximation, is used and investigated in more detail. The contributions from the higher orders of scattering inside the rough layer are calculated under the iterated Kirchhoff approximation. The method can be applied to rough layers of either very rough or perfectly flat lower interfaces, separating either lossless or lossy media. The results are compared with the PILE (propagation-inside-layer expansion) method, recently developed by Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)], and accelerated by the forward-backward method with spectral acceleration. They highlight that there is very good agreement between the developed method and the reference numerical method for all scattering orders and that the method can be applied to root-mean-square (RMS) heights at least down to 0.25λ.
Probabilistic Cellular Automata
Agapie, Alexandru; Giuclea, Marius
2014-01-01
Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case—connecting the probability of a configuration in the stationary distribution to its number of zero-one borders—the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata. PMID:24999557
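A minimal synchronous probabilistic CA step consistent with the description above; the specific neighborhood and rule table are illustrative assumptions, not the ones studied in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_step(grid, p_one):
    """One synchronous step of a probabilistic cellular automaton on a binary
    lattice with periodic boundaries: each cell becomes 1 with probability
    p_one[s], where s counts the ones among the cell and its four von Neumann
    neighbors (so p_one has 6 entries, for s = 0..5)."""
    s = sum(np.roll(grid, shift, axis) for shift, axis in
            [(1, 0), (-1, 0), (1, 1), (-1, 1)]) + grid
    return (rng.random(grid.shape) < p_one[s]).astype(int)

# Example: a noisy majority rule on a 64 x 64 lattice.
# grid = rng.integers(0, 2, (64, 64))
# grid = pca_step(grid, np.array([0.05, 0.1, 0.3, 0.7, 0.9, 0.95]))
```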
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of sufficient support vectors. Experimental results on a wide variety of real-world applications of small and large instance size, in the context of binary classification, multi-class problems, and regression, are then reported to show that RKELM can perform at a competitive level of generalization performance to the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
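The non-iterative core of such a method can be sketched in a few lines; the RBF kernel, ridge parameter C, and function names below are assumptions, but they illustrate the point that training reduces to one random subset selection plus one linear solve.

```python
import numpy as np

def rkelm_train(X, y, n_support, C, gamma, rng=np.random.default_rng(0)):
    """Reduced-kernel ELM sketch: pick a random subset Xs of the data as
    mapping samples, build the rectangular RBF kernel K(X, Xs), and solve a
    ridge-regularized least-squares problem for the output weights."""
    Xs = X[rng.choice(len(X), size=n_support, replace=False)]
    K = np.exp(-gamma * ((X[:, None, :] - Xs[None, :, :]) ** 2).sum(-1))
    beta = np.linalg.solve(K.T @ K + np.eye(n_support) / C, K.T @ y)
    return Xs, beta

def rkelm_predict(X, Xs, beta, gamma):
    K = np.exp(-gamma * ((X[:, None, :] - Xs[None, :, :]) ** 2).sum(-1))
    return K @ beta
```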
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.
Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.
Al-Mulhem, M; Al-Maghrabi, T
1998-01-01
This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of the E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results show that the proposed algorithm can find nearly optimal solutions for the E-TSP, outperforming many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
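As a stand-in for the NII phase, a classic 2-opt improvement loop conveys the flavor of iterative tour rearrangement; the paper's two specific rearrangement operators are not reproduced here, and all names are assumptions.

```python
import numpy as np

def two_opt(tour, pts, max_passes=50):
    """Improve a tour (list of point indices) by repeatedly reversing
    segments whenever the reversal shortens the tour (2-opt moves)."""
    d = lambda a, b: np.linalg.norm(pts[a] - pts[b])
    improved, passes = True, 0
    while improved and passes < max_passes:
        improved, passes = False, passes + 1
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b, c, e = tour[i-1], tour[i], tour[j], tour[j+1]
                if d(a, c) + d(b, e) < d(a, b) + d(c, e) - 1e-12:
                    tour[i:j+1] = tour[i:j+1][::-1]   # reverse the segment
                    improved = True
    return tour
```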
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
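A tabular caricature of the local update follows, with finite state and action sets assumed. The paper's ADP setting involves approximate value functions over continuous states, so this only illustrates the idea of updating a subset of states per iteration.

```python
def local_value_iteration(stage_cost, f, V, subsets, actions):
    """Local value iteration sketch: in iteration i only the states in
    subsets[i] receive the Bellman update
        V_{i+1}(x) = min_u [ stage_cost(x, u) + V_i(f(x, u)) ],
    instead of sweeping the whole (finite) state space. f is the
    deterministic dynamics, V a dict mapping states to values."""
    for S in subsets:                        # one state subset per iteration
        V_new = V.copy()
        for x in S:
            V_new[x] = min(stage_cost(x, u) + V[f(x, u)] for u in actions)
        V = V_new
    return V
```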
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instantaneous power is a key parameter of the ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of the ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at a given uncertainty of the full monitor response. In the previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for achieving the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and use correction factors to the DT mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and calibrate 238U chambers by responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the amount of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the standard monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the standard detectors are located. Owing to low background, detectors of neutron chambers do not need calibration in the reactor, because this is actually a determination of the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
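The role of the Gauss quadrature scheme can be illustrated with a one-dimensional sketch: each quadrature node corresponds to one point-source irradiation position, so the node count directly controls the number of irradiations needed for a given integration accuracy. Function and variable names here are assumptions.

```python
import numpy as np

def path_integral(response, a, b, n_nodes):
    """Gauss-Legendre approximation of the integral of a detector-response
    profile response(z) over a source path [a, b]. Each node z_k is one
    irradiation position, so few nodes = few 'expensive' experiments."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    z = 0.5 * (b - a) * x + 0.5 * (b + a)             # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * response(z))
```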
Low-dose 4D cardiac imaging in small animals using dual source micro-CT
NASA Astrophysics Data System (ADS)
Holbrook, M.; Clark, D. P.; Badea, C. T.
2018-01-01
Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D + Time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.
Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2007-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. The nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics (the Shannon entropy of the cooccurrence statistics normalized by the effective dynamic range of the image), which is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to the intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. This algorithm significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095
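A sketch of the stopping-criterion computation as described; the exact normalization used by the authors is an assumption here, as are all names.

```python
import numpy as np

def scaled_entropy(coocc, dynamic_range):
    """Shannon entropy of the normalized intensity co-occurrence histogram,
    scaled by the effective dynamic range (assumed normalization). Iteration
    would stop when this measure stops decreasing."""
    p = coocc / coocc.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(dynamic_range)
```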
Truncation-based energy weighting string method for efficiently resolving small energy barriers
NASA Astrophysics Data System (ADS)
Carilli, Michael F.; Delaney, Kris T.; Fredrickson, Glenn H.
2015-08-01
The string method is a useful numerical technique for resolving minimum energy paths in rare-event barrier-crossing problems. However, when applied to systems with relatively small energy barriers, the string method becomes inconvenient since many images trace out physically uninteresting regions where the barrier has already been crossed and recrossing is unlikely. Energy weighting alleviates this difficulty to an extent, but typical implementations still require the string's endpoints to evolve to stable states that may be far from the barrier, and deciding upon a suitable energy weighting scheme can be an iterative process dependent on both the application and the number of images used. A second difficulty arises when treating nucleation problems: for later images along the string, the nucleus grows to fill the computational domain. These later images are unphysical due to confinement effects and must be discarded. In both cases, computational resources associated with unphysical or uninteresting images are wasted. We present a new energy weighting scheme that eliminates all of the above difficulties by actively truncating the string as it evolves and forcing all images, including the endpoints, to remain within and cover uniformly a desired barrier region. The calculation can proceed in one step without iterating on strategy, requiring only an estimate of an energy value below which images become uninteresting.
Surface topography estimated by inversion of satellite gravity gradiometry observations
NASA Astrophysics Data System (ADS)
Ramillien, Guillaume
2015-04-01
An integration of mass elements is presented for evaluating the six components of the second-order gravity tensor (i.e., second derivatives of the Newtonian mass integral for the gravitational potential) created by an uneven spherical topography consisting of juxtaposed vertical prisms. The method is based on Legendre polynomial series, with the originality of taking into account elastic compensation of the topography by the Earth's surface. The speed of computation of the polynomial series increases logically with the observing altitude above the source of the anomaly. Such forward modelling can easily be used for reduction of observed gravity gradient anomalies by the effects of any spherical interface of density. Moreover, an iterative least-squares inversion of the observed gravity tensor values Γαβ is proposed to estimate a regional set of topographic heights. Several recovery tests have been made considering simulated gradiometry anomaly data, for varying satellite altitudes and a priori levels of accuracy. In the case of GOCE-type gradiometry anomalies measured at an altitude of ~300 km, the search converges to a stable and smooth topography after 20-30 iterations, while the final r.m.s. error is ~100 m. The possibility of cumulating satellite information from different orbit geometries is also examined for improving the prediction.
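A minimal sketch of the iterative least-squares update described above, written in generic damped Gauss-Newton form (the damping parameter $\lambda$ and the notation are assumptions, not taken from the paper). With $J$ the Jacobian of the modelled tensor components $\Gamma(h)$ with respect to the topographic heights $h$,

\[
h^{(k+1)} = h^{(k)} + \left( J^\top J + \lambda I \right)^{-1} J^\top \left( \Gamma^{\mathrm{obs}} - \Gamma(h^{(k)}) \right).
\]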
Wang, Chang; Ren, Qiongqiong; Qin, Xin
2018-01-01
Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. This method was tested by synthetic image and same modality medical image. Large deformation was simulated by rotational distortion and extrusion transform, medical image registration with large deformation was performed, and quantitative analyses were conducted using the registration evaluation indexes, and the influence of different driving forces and parameters on the registration result was analyzed. The registration results of same modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and mean square error was the lowest. Medical image registration with large deformation could be performed successfully; evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same modality medical images with large deformation.
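One additive demons iteration in 2D, for flavor. This sketch omits the diffeomorphic composition, the multiresolution pyramid, and the adaptive stopping rule that are the paper's contributions; the scipy functions are real, everything else is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(F, M, u, v, sigma=2.0):
    """One additive demons update: warp the moving image M by the current
    displacement field (u, v), compute the classic demons force from the
    intensity mismatch with the fixed image F, then smooth the field."""
    yy, xx = np.mgrid[0:F.shape[0], 0:F.shape[1]].astype(float)
    Mw = map_coordinates(M, [yy + v, xx + u], order=1, mode='nearest')
    gy, gx = np.gradient(F)
    diff = Mw - F
    denom = gx**2 + gy**2 + diff**2 + 1e-12        # demons normalization
    u = gaussian_filter(u - diff * gx / denom, sigma)
    v = gaussian_filter(v - diff * gy / denom, sigma)
    return u, v
```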
Design concept of a cryogenic distillation column cascade for a ITER scale fusion reactor
NASA Astrophysics Data System (ADS)
Yamanishi, Toshihiko; Enoeda, Mikio; Okuno, Kenji
1994-07-01
A column cascade has been proposed for the fuel cycle of an ITER-scale fusion reactor. The proposed cascade consists of three columns and has significant features: for each column, either the top or the bottom product has priority over the other; side streams are not withdrawn as products or feeds of downstream columns; and there is no recycle stream between the columns. In addition, the product purity of the cascade can be maintained against changes in the flow rates and compositions of the feed streams simply by adjusting the top and bottom flow rates. A control system has been designed for each column in the cascade. A key component in the priority product stream was selected, and an analysis method for this key component was proposed. The designed control system never becomes unstable as long as the concentration of the key component is measured with negligible time lag; the time lag of the measurement considerably affects the stability of the control system. A significant conclusion from the simulations in this work is that the permissible measurement time is about 0.5 hour for stable control. Hence, an analysis system using gas chromatography is valid for control of the columns.
Axisymmetric Vortices with Swirl
NASA Astrophysics Data System (ADS)
Elcrat, A.
2007-11-01
This talk is concerned with finding solutions of the Euler equations by solving elliptic boundary value problems for the Bragg-Hawthorne equation Lu = -u_rr + (1/r)u_r - u_zz = r^2 f(u) + h(u). Theoretical results were given previously (Elcrat and Miller, Differential and Integral Equations 16(4) 2003, 949-968) for problems with swirl and general classes of profile functions f, h, by iterating Lu^(n+1) = r^2 f(u^(n)) + h(u^(n)) and showing that u^(n) converges monotonically to a solution. The solutions obtained depend on the initial guess, which can be thought of as prescribing level sets of the vortex. When a computational program was attempted, these monotone iterations turned out to be numerically unstable, and a stable computation was achieved by fixing the moment of the cross section of the vortex in the meridional plane. (This generalizes previous computational results in Elcrat, Fornberg and Miller, JFM 433 (2001), 315-328.) We obtain families of vortices related to vortex rings with swirl, Moffatt's generalization of Hill's vortex, and tubes of vorticity with swirl wrapped around the symmetry axis. The vortices are embedded in either an irrotational flow or a flow with shear, and we deal with the transition from no swirl in the vortex to flow with only swirl.
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm that can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
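The part-to-part update at the heart of such a scheme can be as simple as a damped Gauss-Newton step on the flange-error vector. This is a sketch under assumed notation, not the paper's algorithm: u is the process-parameter vector, e the measured flange-edge error for the last part, and J an estimated sensitivity matrix.

```python
import numpy as np

def ilc_update(u, e, J, damping=0.5):
    """Between-parts iterative learning update: adjust process parameters u
    (e.g. a blank-holder force profile) by a damped Gauss-Newton step that
    reduces the least-squares flange-geometry error e, with J = de/du."""
    du = np.linalg.solve(J.T @ J + damping * np.eye(len(u)), J.T @ e)
    return u - du   # parameters to use for the next part
```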
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
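A compact version of the AR-fit-then-convert workflow: Yule-Walker estimation and the AR-to-MA impulse-response recursion are both standard textbook steps, sketched here under assumed names; the original FORTRAN algorithm's amplitude-randomness maximization is not reproduced.

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Estimate AR(p) coefficients phi_1..phi_p from the sample
    autocovariances via the Yule-Walker equations."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x)-k], x[k:]) for k in range(p + 1)]) / len(x)
    Rmat = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(Rmat, r[1:p+1])

def ar_to_ma(phi, n_terms=50):
    """Transform the fitted AR model to a (truncated) MA representation:
    psi_0 = 1, psi_k = sum_{j=1..min(p,k)} phi_j * psi_{k-j}."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for k in range(1, n_terms):
        psi[k] = sum(phi[j] * psi[k-1-j] for j in range(min(len(phi), k)))
    return psi
```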
NASA Technical Reports Server (NTRS)
Levy, L. L., Jr.; Burns, R. K.
1972-01-01
A theoretical investigation has been made to design an isotope heat source capable of satisfying the conflicting thermal requirements of steady-state operation and atmosphere entry. The isotope heat source must transfer heat efficiently to a heat exchanger during normal operation with a power system in space, and in the event of a mission abort, it must survive the thermal environment of atmosphere entry and ground impact without releasing radioactive material. A successful design requires a compatible integration of the internal components of the heat source with the external aerodynamic shape. To this end, configurational, aerodynamic, motion, and thermal analyses were coupled and iterated during atmosphere entries at suborbital through superorbital velocities at very shallow and very steep entry angles. Results indicate that both thermal requirements can be satisfied by a heat source which has a single stable aerodynamic orientation at hypersonic speeds. For such a design, the insulation material required to adequately protect the isotope fuel from entry heating need extend only halfway around the fuel capsule on the aerodynamically stable (windward) side of the heat source. Thus, a low-thermal-resistance, conducting heat path is provided on the opposite side of the heat source through which heat can be transferred to an adjacent heat exchanger during normal operation without exceeding specified temperature limits.
Prunuske, Amy J; Henn, Lisa; Brearley, Ann M; Prunuske, Jacob
Medical education increasingly involves online learning experiences to facilitate the standardization of curriculum across time and space. In class, delivering material by lecture is less effective at promoting student learning than engaging students in active learning experiences, and it is unclear whether this difference also exists online. We sought to evaluate medical student preferences for online lecture or online active learning formats and the impact of format on short- and long-term learning gains. Students participated online in either lecture or constructivist learning activities in a first-year neurologic sciences course at a US medical school. In 2012, students selected which format to complete, and in 2013, students were randomly assigned in a crossover fashion to the modules. In the first iteration, students strongly preferred the lecture modules and valued being told "what they need to know" rather than figuring it out independently. In the crossover iteration, learning gains and knowledge retention were found to be equivalent regardless of format, and students uniformly demonstrated a strong preference for the lecture format, which also on average took less time to complete. When given a choice of online modules, students prefer passive lecture to completing constructivist activities, and in the time-limited environment of medical school, this choice results in similar performance on multiple-choice examinations with less time invested. Instructors need to look more carefully at whether assessments and learning strategies are helping students to obtain self-directed learning skills, and to consider strategies to help students learn to value active learning in an online environment.
Wang, Xiong; Zheng, Kai; Zheng, Huayu; Nie, Hongli; Yang, Zujun; Tang, Lixia
2014-12-20
Iterative saturation mutagenesis (ISM) has been shown to be a powerful method for directed evolution. In this study, the approach was modified (termed M-ISM) by combining the single-site saturation mutagenesis method with a DC-Analyzer-facilitated combinatorial strategy, aiming to evolve novel biocatalysts efficiently when multiple sites are targeted simultaneously. Initially, all target sites were explored individually by constructing single-site saturation mutagenesis libraries. Next, the top two to four variants in each library were selected and combined using the DC-Analyzer-facilitated combinatorial strategy. In addition to site-saturation mutagenesis, iterative saturation mutagenesis also needed to be performed. The advantages of M-ISM over ISM are that the screening effort is greatly reduced and the entire M-ISM procedure is less time-consuming. The M-ISM strategy was successfully applied to the randomization of halohydrin dehalogenase from Agrobacterium radiobacter AD1 (HheC) when five interesting sites were targeted simultaneously. After screening 900 clones in total, six positive mutants were obtained. These mutants exhibited 4.0- to 9.3-fold higher k(cat) values than the wild-type HheC toward 1,3-dichloro-2-propanol (1,3-DCP). With the ISM strategy, in contrast, the best hit showed a 5.9-fold higher k(cat) value toward 1,3-DCP than the wild-type HheC, and was obtained only after screening 4000 clones over four rounds of mutagenesis. Therefore, M-ISM could serve as a simple and efficient version of ISM for the randomization of target genes with multiple positions of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene
Purpose: 90Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with 90Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging 90Y and to compare experimental results with clinical data from two types of commercially available 90Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph mCT (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of 90Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a 90Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, data from 32 patients (8 treated with TheraSphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and the activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of 90Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results is limited as long as ordinary Poisson ordered-subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view is in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important for accelerating the computation. Beyond time-domain methods, frequency-domain methods have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits, such as Manhattan geometry and layered permittivity, is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM-based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on the matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix-exponential-based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM into a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
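The "deduct the unstable modes" idea in the second contribution can be illustrated on a small symmetric stand-in system; a minimal sketch, assuming a generic SPD matrix and the central-difference stability bound λ ≤ 4/Δt², rather than the dissertation's actual TDFEM matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Minimal sketch of deflating unstable modes (assumptions: a small SPD
# stand-in for the FEM system matrix with identity mass matrix, and the
# classical central-difference stability criterion lambda <= 4/dt^2).
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
S = B @ B.T + 50 * np.eye(50)        # SPD stand-in for the system matrix

dt = 0.2
lam_max = 4.0 / dt**2                # stability bound for this time step

w, V = eigh(S)                       # eigenmodes of the system matrix
unstable = w > lam_max               # modes that would blow up at this dt

# Deduct unstable modes directly from the matrix: S' = S - sum lambda v v^T
S_stable = S - (V[:, unstable] * w[unstable]) @ V[:, unstable].T
print(f"removed {unstable.sum()} unstable modes; max eigenvalue now "
      f"{eigh(S_stable, eigvals_only=True).max():.3g} <= {lam_max:.3g}")
```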
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness, the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high-speed on-the-fly processing without the need for extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in a 1.2 Mbit/s generation rate in our implementation.
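For intuition, the sketch below extracts unbiased bits from simulated inter-click times by comparing consecutive interval pairs; this von Neumann-style comparison is an assumed simplification standing in for the paper's look-up-table extractor (which reaches roughly twice this bit rate):

```python
import numpy as np

# Toy sketch: inter-click intervals of a Poissonian single-photon detector
# are i.i.d. exponential, so P(a < b) = 1/2 for a consecutive pair, and each
# comparison yields one unbiased bit.
rng = np.random.default_rng(1)
intervals = rng.exponential(scale=1.0, size=100_000)  # simulated raw intervals

pairs = intervals.reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]                     # discard (measure-zero) ties
bits = (pairs[keep, 0] < pairs[keep, 1]).astype(np.uint8)

print(f"{bits.size} bits from {intervals.size} clicks "
      f"({bits.size / intervals.size:.2f} bits/click), mean={bits.mean():.4f}")
```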
Kumarasamy, Nagalingeswaran; Poongulali, Selvamuthu; Bollaerts, Anne; Moris, Philippe; Beulah, Faith Esther; Ayuk, Leo Njock; Demoitié, Marie-Ange; Jongert, Erik; Ofori-Anyinam, Opokua
2016-01-01
Human immunodeficiency virus (HIV)-associated tuberculosis is a major public health threat. We evaluated the safety and immunogenicity of the candidate tuberculosis vaccine M72/AS01 in HIV-positive and HIV-negative Indian adults. Randomized, controlled observer-blind trial (NCT01262976). We assigned 240 adults (1:1:1) to antiretroviral therapy (ART)-stable, ART-naive, or HIV-negative cohorts. Cohorts were randomized 1:1 to receive M72/AS01 or placebo following a 0, 1-month schedule and followed for 12 months (time-point M13). HIV-specific and laboratory safety parameters, adverse events (AEs), and M72-specific T-cell-mediated and humoral responses were evaluated. Subjects were predominantly QuantiFERON-negative (60%) and Bacille Calmette–Guérin-vaccinated (73%). Seventy ART-stable, 73 ART-naive, and 60 HIV-negative subjects completed year 1. No vaccine-related serious AEs or ART-regimen adjustments, or clinically relevant effects on laboratory parameters, HIV-1 viral loads or CD4 counts were recorded. Two ART-naive vaccinees died of vaccine-unrelated diseases. M72/AS01 induced polyfunctional M72-specific CD4+ T-cell responses (median [interquartile range] at 7 days postdose 2: ART-stable, 0.9% [0.7–1.5]; ART-naive, 0.5% [0.2–1.0]; and HIV-negative, 0.6% [0.4–1.1]), persisting at M13 (0.4% [0.2–0.5], 0.09% [0.04–0.2], and 0.1% [0.09–0.2], respectively). Median responses were higher in the ART-stable cohort versus the ART-naive cohort from day 30 onwards (P ≤ 0.015). Among HIV-positive subjects (irrespective of ART status), median responses were higher in QuantiFERON-positive versus QuantiFERON-negative subjects up to day 30 (P ≤ 0.040), but comparable thereafter. Cytokine-expression profiles were comparable between cohorts after dose 2. At M13, M72-specific IgG responses were higher in ART-stable and HIV-negative vaccinees versus ART-naive vaccinees (P ≤ 0.001). M72/AS01 was well-tolerated and immunogenic in this population of ART-stable and ART-naive HIV-positive adults and HIV-negative adults, supporting further clinical evaluation. PMID:26817879
Liu, Xin; Zhang, Hui; Feng, Hua; Hong, Lei; Wang, Xue-Song; Song, Guan-Yang
2017-04-01
A special type of meniscal lesion involving the peripheral attachment of the posterior horn of the medial meniscus (PHMM), termed a "ramp lesion," is commonly associated with an anterior cruciate ligament (ACL) injury. However, its treatment is still controversial. Recently, stable ramp lesions treated with abrasion and trephination alone have been shown to have good clinical outcomes after ACL reconstruction. We hypothesized that stable ramp lesions treated with abrasion and trephination alone during ACL reconstruction would result in similar clinical outcomes compared with those treated with surgical repair. Randomized controlled trial; Level of evidence, 2. A prospective randomized controlled study was performed in 91 consecutive patients who had complete ACL injuries and concomitant stable ramp lesions of the medial meniscus. All patients were randomly allocated to 1 of 2 groups based on whether the stable ramp lesions were surgically repaired (study group; n = 50) or only abraded and trephined (control group; n = 41) during ACL reconstruction. All surgical procedures were performed by a single surgeon who was blinded to the functional assessment findings of the patients. The Lysholm score, subjective International Knee Documentation Committee (IKDC) score, and stability assessments (pivot-shift test, Lachman test, KT-1000 arthrometer side-to-side difference, and KT-1000 arthrometer differences of <3, 3-5, and >5 mm) were evaluated preoperatively and at the last follow-up. Moreover, magnetic resonance imaging (MRI) was used to evaluate the healing status of the ramp lesions. All consecutive patients who were screened for eligibility from August 2008 to April 2012 were enrolled and observed clinically. There were 40 patients in the study group and 33 patients in the control group who were observed for at least 2 years. At the final follow-up, there were no significant differences between the study group and the control group in terms of the mean Lysholm score (88.7 ± 4.8 vs 90.4 ± 5.8, respectively; P = .528), mean subjective IKDC score (83.6 ± 3.7 vs 82.2 ± 4.5, respectively; P = .594), pivot-shift test results (P = .658), Lachman test results (P = .525), KT-1000 arthrometer side-to-side difference (1.6 ± 1.2 vs 1.5 ± 1.1, respectively; P = .853), or KT-1000 arthrometer grading (P = .738). Overall, for both groups (n = 73), 67 patients showed completely healed (38 study, 29 control), 3 showed partially healed (1 study, 2 control), and 3 showed nonhealed (1 study, 2 control) signals on follow-up MRI when evaluating the healing status of the ramp lesions. There was no significant difference in the healing status of the ramp lesions between the 2 groups (P = .543). This prospective randomized controlled trial showed that, in terms of subjective scores, knee stability, and meniscal healing status, concomitant stable ramp lesions of the medial meniscus treated with abrasion and trephination alone during ACL reconstruction resulted in similar clinical outcomes compared with those treated with surgical repair.
Mid-infrared optical parametric oscillator pumped by an amplified random fiber laser
NASA Astrophysics Data System (ADS)
Shang, Yaping; Shen, Meili; Wang, Peng; Li, Xiao; Xu, Xiaojun
2017-01-01
Recently, the concept of random fiber lasers has attracted a great deal of attention for its ability to generate incoherent light without a traditional laser resonator, which is free of mode competition and ensures a stationary, narrow-band, continuous modeless spectrum. In this Letter, we report the first, to the best of our knowledge, optical parametric oscillator (OPO) pumped by an amplified 1070 nm random fiber laser (RFL), used to generate stationary mid-infrared (mid-IR) light. The experiment realized watt-level laser output in the mid-IR range that operated relatively stably. The use of the RFL seed source allowed us to take advantage of its stable time-domain characteristics. The beam profile, spectrum, and time-domain properties of the signal light were measured to analyze the frequency down-conversion process under this new pumping condition. The results suggest that the near-infrared (near-IR) signal light 'inherited' good beam performance from the pump light. These results should benefit the further development of optical parametric processes under different pumping conditions.
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
Cochlea segmentation using iterated random walks with shape prior
NASA Astrophysics Data System (ADS)
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Vera, Sergio; Ceresa, Mario; González Ballester, Miguel Ángel
2016-03-01
Cochlear implants can restore hearing to deaf or partially deaf patients. In order to plan the intervention, a model is to be built from accurate cochlea segmentations of high-resolution µCT images and then adapted to a patient-specific model. Thus, a precise segmentation is required to build such a model. We propose a new framework for segmentation of µCT cochlear images using random walks, in which a region term is combined with a distance shape prior weighted by a confidence map to adjust its influence according to the strength of the image contour. The region term can then take advantage of the high contrast between the background and foreground, while the distance prior guides the segmentation to the exterior of the cochlea as well as to less contrasted regions inside the cochlea. Finally, a refinement is performed that preserves the topology, using a topological method and an error control map to prevent boundary leakage. We tested the proposed approach on 10 datasets and compared it with the latest random-walk-with-prior techniques. The experiments suggest that this method gives promising results for cochlea segmentation.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. Instead of being estimated explicitly, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism
NASA Technical Reports Server (NTRS)
Dervan, Jared; Robertson, Brandon; Staab, Lucas; Culberson, Michael
2014-01-01
Commodities are transferred between the Multi-Purpose Crew Vehicle (MPCV) crew module (CM) and service module (SM) via an external umbilical that is driven apart with spring-loaded struts after the structural connection is severed. The spring struts must operate correctly for the modules to separate safely. There was no vibration testing of strut development units scoped in the MPCV Program Plan; therefore, any design problems discovered as a result of vibration testing would not have been found until the component qualification. The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations including identified lessons learned and best practices to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.
NASA Astrophysics Data System (ADS)
Lu, Wei; Tan, Jinglu; Floyd, Randall C.
2005-04-01
Object detection in ultrasound fetal images is a challenging task due to the relatively low resolution and low signal-to-noise ratio. A direct inverse randomized Hough transform (DIRHT) is developed for filtering and detecting incomplete curves in images with strong noise. The DIRHT combines the advantages of both the inverse and the randomized Hough transforms. In the reverse image, curves are highlighted while a large number of unrelated pixels are removed, demonstrating a "curve-pass filtering" effect. Curves are detected by iteratively applying the DIRHT to the filtered image. The DIRHT was applied to head detection and measurement of the biparietal diameter (BPD) and head circumference (HC). No user input or geometric properties of the head were required for the detection. The detection and measurement took 2 seconds for each image on a PC. The inter-run variations and the differences between the automatic measurements and sonographers' manual measurements were small compared with published inter-observer variations. The results demonstrated that the automatic measurements were consistent and accurate. This method provides a valuable tool for fetal examinations.
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
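A minimal sketch of the hierarchy (with the simplifying assumptions of a regular 2^k by 2^k grid in place of a random dyadic tiling, greedy nearest-neighbour tours in place of near-optimal ones, and plain concatenation in place of the midpoint-based merging):

```python
import numpy as np

# Hierarchical TSP sketch: tile the unit square, tour the tile barycenters,
# then tour each cluster and concatenate in barycenter-tour order.
rng = np.random.default_rng(2)
pts = rng.random((2000, 2))
k = 4
tiles = {}
for p in pts:
    key = (int(p[0] * 2**k), int(p[1] * 2**k))
    tiles.setdefault(key, []).append(p)

def nn_order(points):
    """Greedy nearest-neighbour visiting order (list of indices)."""
    todo = list(range(len(points)))
    order = [todo.pop(0)]
    while todo:
        last = points[order[-1]]
        i = min(todo, key=lambda j: np.hypot(*(points[j] - last)))
        todo.remove(i)
        order.append(i)
    return order

clusters = list(tiles.values())
centers = np.array([np.mean(c, axis=0) for c in clusters])  # barycenters
coarse = nn_order(centers)                                   # "coarse" tour

tour = []
for ci in coarse:                                            # per-cluster sub-tours
    sub = nn_order(np.array(clusters[ci]))
    tour.extend(clusters[ci][i] for i in sub)

length = sum(np.hypot(*(tour[i] - tour[(i + 1) % len(tour)]))
             for i in range(len(tour)))
print(f"tour length over {len(pts)} points: {length:.2f}")
```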
Wakelee, Heather A.; Lee, Ju-Whei; Hanna, Nasser H.; Traynor, Anne M.; Carbone, David P.; Schiller, Joan H.
2012-01-01
Introduction: Sorafenib is a raf kinase and angiogenesis inhibitor with activity in multiple cancers. This phase II study in heavily pretreated non-small cell lung cancer (NSCLC) patients (≥ two prior therapies) utilized a randomized discontinuation design. Methods: Patients received 400 mg of sorafenib orally twice daily for two cycles (two months) (Step 1). Responding patients on Step 1 continued on sorafenib; progressing patients went off study, and patients with stable disease were randomized to placebo or sorafenib (Step 2), with crossover from placebo allowed upon progression. The primary endpoint of this study was the proportion of patients having stable or responding disease two months after randomization. Results: There were 299 patients evaluated for Step 1, with 81 eligible patients randomized on Step 2 who received sorafenib (n=50) or placebo (n=31). The two-month disease control rates following randomization were 54% and 23% for patients initially receiving sorafenib and placebo respectively, p=0.005. The hazard ratio for progression on Step 2 was 0.51 (95% CI 0.30, 0.87, p=0.014) favoring sorafenib. A trend in favor of overall survival with sorafenib was also observed (13.7 versus 9.0 months from time of randomization), HR 0.67 (95% CI 0.40-1.11), p=0.117. A dispensing error occurred which resulted in unblinding of some patients, but not before completion of the 8-week initial Step 2 therapy. Toxicities were manageable and as expected. Conclusions: The results of this randomized discontinuation trial suggest that sorafenib has single-agent activity in a heavily pretreated, enriched patient population with advanced NSCLC. These results support further investigation of sorafenib as a single agent in larger, randomized studies in NSCLC. PMID:22982658
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
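The quadrature step can be sketched for the simplest case of a single normally distributed random intercept in a logistic model; the data and parameter values below are synthetic assumptions, and the paper's model generalizes this to multidimensional quadrature and ordinal outcomes:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Marginal log-likelihood of a random-intercept logistic model, integrating
# the random effect out with Gauss-Hermite quadrature:
#   log L = sum_i log Int prod_j p(y_ij | x_ij, b_i) N(b_i; 0, sigma^2) db_i
def marginal_loglik(beta, sigma, y, x, n_quad=20):
    nodes, weights = hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma * nodes            # change of variables for N(0, sigma^2)
    ll = 0.0
    for yi, xi in zip(y, x):                    # loop over subjects
        eta = beta * xi[:, None] + b[None, :]   # linear predictor at each node
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(np.where(yi[:, None] == 1, p, 1 - p), axis=0)
        ll += np.log(np.dot(weights, lik) / np.sqrt(np.pi))
    return ll

# Tiny synthetic check (50 subjects, 5 binary measurements each).
rng = np.random.default_rng(3)
x = [rng.standard_normal(5) for _ in range(50)]
y = [(rng.random(5) < 0.5).astype(int) for _ in range(50)]
print(marginal_loglik(beta=0.5, sigma=1.0, y=y, x=x))
```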
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and its parameters are learned by iterating the Expectation-Maximization algorithm. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
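A minimal sketch of the GMM/EM colour-modelling step on synthetic data (assuming scikit-learn's EM implementation; the full method would add pairwise smoothness terms and run min-cut on the resulting Gibbs energy rather than thresholding the unary term):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic "image": 500 background pixels and 500 hand pixels in RGB space.
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(0.2, 0.05, (500, 3)),   # background
                      rng.normal(0.7, 0.05, (500, 3))])  # hand

# User scribbles provide training samples for the two colour models.
fg_scribble, bg_scribble = img[900:], img[:100]
fg = GaussianMixture(n_components=2, random_state=0).fit(fg_scribble)
bg = GaussianMixture(n_components=2, random_state=0).fit(bg_scribble)

# Unary term: log-likelihood under each model. A graph cut would combine
# this with pairwise smoothness terms before the min-cut step.
label = fg.score_samples(img) > bg.score_samples(img)
print(f"pixels labelled foreground: {label.sum()} / {len(img)}")
```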
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
NASA Astrophysics Data System (ADS)
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the network-wise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
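The fixed-size approximation at the heart of the scheme is easy to sketch: random Fourier features give an inner product that approximates a Gaussian kernel, so standard linear adaptive filtering applies. The bandwidth and feature dimension below are arbitrary assumptions:

```python
import numpy as np

# Random Fourier Feature approximation of the RBF kernel
# k(x, y) = exp(-gamma * ||x - y||^2), via frequencies drawn from N(0, 2*gamma).
rng = np.random.default_rng(5)
d, D, gamma = 3, 500, 1.0                 # input dim, feature dim, kernel width

W = rng.normal(0.0, np.sqrt(2 * gamma), (D, d))
b = rng.uniform(0, 2 * np.pi, D)

def z(x):
    """Fixed-size feature map with E[z(x) . z(y)] = k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
print("RFF estimate :", z(x) @ z(y))
print("exact kernel :", np.exp(-gamma * np.sum((x - y) ** 2)))
```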
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal
2016-05-15
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
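The scan-averaging prediction follows from the 1/√N decay of averaged random noise; a back-of-envelope sketch with illustrative (not Diamond-NOM) numbers:

```python
import numpy as np

# Averaging N independent scans reduces the random noise contribution by
# 1/sqrt(N), so the required N follows from the single-scan noise level.
# Both numbers below are hypothetical, chosen only to sit near a 100 nrad budget.
single_scan_noise = 60e-9     # rad rms, assumed single-scan autocollimator noise
target_noise = 20e-9          # rad rms, desired residual after averaging

n_scans = int(np.ceil((single_scan_noise / target_noise) ** 2))
print(f"averaged scans required: {n_scans}")   # (60/20)^2 = 9
```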
Pseudo-orthogonalization of memory patterns for associative memory.
Oku, Makito; Makino, Takaki; Aihara, Kazuyuki
2013-11-01
A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
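A minimal sketch of the masking step on synthetic correlated patterns (pattern count, length, and flip rate are arbitrary assumptions):

```python
import numpy as np

# Pseudo-orthogonalization: XNOR-mask each correlated pattern with a random
# mask, then concatenate masked pattern and mask into a longer, decorrelated
# pattern. Locally computable, no iteration.
rng = np.random.default_rng(6)
n, length = 20, 1000
base = rng.integers(0, 2, length)
# Correlated memory patterns: each flips only 10% of a shared base pattern.
patterns = np.array([np.where(rng.random(length) < 0.1, 1 - base, base)
                     for _ in range(n)])

masks = rng.integers(0, 2, (n, length))
masked = 1 - (patterns ^ masks)               # XNOR = NOT XOR on {0, 1}
decorrelated = np.hstack([masked, masks])     # pattern length doubles

def mean_overlap(p):
    s = 2 * p - 1                             # map {0,1} -> {-1,+1}
    c = (s @ s.T) / p.shape[1]
    return np.abs(c[np.triu_indices(len(p), 1)]).mean()

print(f"mean |overlap| before: {mean_overlap(patterns):.3f}, "
      f"after: {mean_overlap(decorrelated):.3f}")
```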
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
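The matrix-level building block, an interpolative decomposition obtained from a randomized sketch followed by a pivoted QR, can be illustrated as follows (sizes, rank, and oversampling are arbitrary assumptions; the paper applies this construction to matrices generated from the terms of a CTD):

```python
import numpy as np
from scipy.linalg import qr

# Randomized interpolative decomposition of a low-rank matrix A:
# 1) sketch A with a random Gaussian projection,
# 2) pick skeleton columns via column-pivoted QR on the sketch,
# 3) express all columns in the basis of the selected ones.
rng = np.random.default_rng(7)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 300))  # rank 50

k = 50                                             # target rank
Y = rng.standard_normal((k + 10, 200)) @ A         # randomized sketch (oversampled)
_, _, piv = qr(Y, pivoting=True, mode='economic')
cols = piv[:k]                                     # skeleton column indices

coeff, *_ = np.linalg.lstsq(A[:, cols], A, rcond=None)
err = np.linalg.norm(A - A[:, cols] @ coeff) / np.linalg.norm(A)
print(f"relative Frobenius error of the ID: {err:.2e}")
```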
System matrix computation vs storage on GPU: A comparative study in cone beam CT.
Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2018-02-01
Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed performance similar to the on-the-fly approach, while still relying on symmetries. Partial system matrix storage was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
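A back-of-envelope sketch shows why full uncompressed storage is hopeless and even compressed storage is tight; the view count, non-zeros per ray, and bytes per entry below are illustrative assumptions around the paper's stated geometry:

```python
# Memory estimate for a CBCT system matrix (512 x 448 detector, 384^3 volume
# from the paper; 360 views, ~3N non-zeros per ray, and 8 bytes per stored
# non-zero are assumptions for illustration).
views = 360
rows = views * 512 * 448           # one row per measured ray
cols = 384 ** 3                    # one column per voxel
nnz_per_ray = 3 * 384              # a thin ray crosses O(N) voxels per axis
bytes_per_nz = 8                   # 4-byte value + 4-byte column index

sparse_bytes = rows * nnz_per_ray * bytes_per_nz
print(f"rays: {rows:.3e}, voxels: {cols:.3e}")
print(f"compressed system matrix: ~{sparse_bytes / 2**40:.2f} TiB")
# Far beyond commodity GPU RAM, hence symmetries, partial storage,
# or on-the-fly ray tracing.
```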
Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations
NASA Technical Reports Server (NTRS)
Elman, Howard C.
1996-01-01
Discretization of the Stokes equations produces a symmetric indefinite system of linear equations. For stable discretizations, a variety of numerical methods have been proposed that have rates of convergence independent of the mesh size used in the discretization. In this paper, we compare the performance of four such methods: variants of the Uzawa, preconditioned conjugate gradient, preconditioned conjugate residual, and multigrid methods, for solving several two-dimensional model problems. The results indicate that where it is applicable, multigrid with smoothing based on incomplete factorization is more efficient than the other methods, but typically by no more than a factor of two. The conjugate residual method has the advantage of being both independent of iteration parameters and widely applicable.
NASA Astrophysics Data System (ADS)
Prigozhin, Leonid; Sokolovsky, Vladimir
2018-05-01
We consider the fast Fourier transform (FFT) based numerical method for thin film magnetization problems (Vestgården and Johansen 2012 Supercond. Sci. Technol. 25 104001), compare it with the finite element methods, and evaluate its accuracy. Proposed modifications of this method implementation ensure stable convergence of iterations and enhance its efficiency. A new method, also based on the FFT, is developed for 3D bulk magnetization problems. This method is based on a magnetic field formulation, different from the popular h-formulation of eddy current problems typically employed with the edge finite elements. The method is simple, easy to implement, and can be used with a general current–voltage relation; its efficiency is illustrated by numerical simulations.
A modified Dodge algorithm for the parabolized Navier-Stokes equation and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1981-01-01
A revised version of Dodge's split-velocity method for the numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to the solution of the three-dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition. Qualitative agreement with analytical predictions and experimental results was obtained for some flows with well-known solutions.
Motor–sensory convergence in object localization: a comparative study in rats and humans
Horev, Guy; Saig, Avraham; Knutsen, Per Magne; Pietr, Maciej; Yu, Chunxiu; Ahissar, Ehud
2011-01-01
In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs (‘temporal inputs’) from early whisk cycles with spatially related inputs (‘spatial inputs’) to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor–sensory process gradually converges on a stable percept of object location in both species. PMID:21969688
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Geveci, Berk
The FY18Q1 milestone of the ECP/VTK-m project includes the implementation of a multiblock data set, the completion of a gradients filtering operation, and the release of version 1.1 of the VTK-m software. With the completion of this milestone, the new multiblock data set allows us to iteratively schedule algorithms on composite data structures such as assemblies or hierarchies like AMR. The new gradient algorithms approximate derivatives of fields in 3D structures with finite differences. Finally, the release of VTK-m version 1.1 tags a stable release of the software that can more easily be incorporated into external projects.
Vision-based calibration of parallax barrier displays
NASA Astrophysics Data System (ADS)
Ranieri, Nicola; Gross, Markus
2014-03-01
Static and dynamic parallax barrier displays have become very popular over the past years. Especially for single-viewer applications like tablets, phones, and other hand-held devices, parallax barriers provide a convenient solution for rendering stereoscopic content. In our work we present a computer-vision-based calibration approach that relates the image layer and barrier layer of parallax barrier displays with unknown display geometry, for static or dynamic viewer positions, using homographies. We provide the math and methods to compose the required homographies on the fly and present a way to compute the barrier without the need for any iteration. Our GPU implementation is stable and general and can be used to reduce latency and increase the refresh rate of existing and upcoming barrier methods.
Dynamics of internal models in game players
NASA Astrophysics Data System (ADS)
Taiji, Makoto; Ikegami, Takashi
1999-10-01
A new approach for the study of social games and communications is proposed. Games are simulated between cognitive players who build the opponent’s internal model and decide their next strategy from predictions based on the model. In this paper, internal models are constructed by the recurrent neural network (RNN), and the iterated prisoner’s dilemma game is performed. The RNN allows us to express the internal model in a geometrical shape. The complicated transients of actions are observed before the stable mutually defecting equilibrium is reached. During the transients, the model shape also becomes complicated and often experiences chaotic changes. These new chaotic dynamics of internal models reflect the dynamical and high-dimensional rugged landscape of the internal model space.
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
High power tunable mid-infrared optical parametric oscillator enabled by random fiber laser.
Wu, Hanshuo; Wang, Peng; Song, Jiaxin; Ye, Jun; Xu, Jiangming; Li, Xiao; Zhou, Pu
2018-03-05
The random fiber laser, a novel kind of fiber laser that utilizes random distributed feedback as well as Raman gain, has become a research focus owing to its wavelength flexibility, modeless property, and output stability. Herein, a tunable optical parametric oscillator (OPO) enabled by a random fiber laser is reported for the first time. By exploiting a tunable random fiber laser to pump the OPO, the central wavelength of the idler light can be continuously tuned from 3977.34 to 4059.65 nm with stable temporal average output power. The maximal output power achieved is 2.07 W. So far as we know, this is the first demonstration of a continuous-wave tunable OPO pumped by a tunable random fiber laser, which could not only provide a new approach for achieving tunable mid-infrared (MIR) emission, but also extend the application scenarios of random fiber lasers.
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
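A tabular sketch of the value iteration itself (assuming a toy 1-D system on a grid with interpolation, zero initialization, and a simple termination criterion; the paper instead approximates the value function and control law with neural networks):

```python
import numpy as np

# Undiscounted value iteration on a discretized 1-D nonlinear control problem:
# V_{i+1}(x) = min_u [ x^2 + u^2 + V_i(f(x, u)) ], starting from V_0 = 0
# (an arbitrary positive semi-definite initialization, as the algorithm permits).
x_grid = np.linspace(-2, 2, 81)
u_grid = np.linspace(-1, 1, 21)

def f(x, u):                                   # toy stabilizable dynamics
    return np.clip(0.8 * x + 0.2 * u, x_grid[0], x_grid[-1])

V = np.zeros_like(x_grid)
for it in range(200):
    Q = (x_grid[:, None] ** 2 + u_grid[None, :] ** 2
         + np.interp(f(x_grid[:, None], u_grid[None, :]), x_grid, V))
    V_new = Q.min(axis=1)                      # greedy over the control grid
    if np.max(np.abs(V_new - V)) < 1e-9:       # termination criterion
        break
    V = V_new                                  # monotonically nondecreasing here

print(f"converged after {it} iterations, V(1.0) = {np.interp(1.0, x_grid, V):.4f}")
```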
Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim
2009-01-01
This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078
DNA capture elements for rapid detection and identification of biological agents
NASA Astrophysics Data System (ADS)
Kiel, Johnathan L.; Parker, Jill E.; Holwitt, Eric A.; Vivekananda, Jeeva
2004-08-01
DNA capture elements (DCEs; aptamers) are artificial DNA sequences, selected from a random pool of sequences for their specific binding to potential biological warfare agents. These sequences were selected by an affinity method using filters to which the target agent was attached; the bound DNA was isolated and amplified by polymerase chain reaction (PCR) in an iterative, increasingly stringent process. Reporter molecules were attached to the finished sequences. To date, we have made DCEs to Bacillus anthracis spores, Shiga toxin, Venezuelan Equine Encephalitis (VEE) virus, and Francisella tularensis. These DCEs have demonstrated specificity and sensitivity equal to or better than antibodies.
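The iterative, increasingly stringent selection loop can be caricatured in a few lines; the motif, affinity model, mutation rate, and round count below are all toy assumptions with no relation to the actual agents:

```python
import random

# Toy SELEX-style loop: filter a random pool on "affinity", then re-amplify
# the survivors with occasional point mutations, tightening the cutoff each
# round via a percentile rule.
random.seed(8)
BASES = "ACGT"
TARGET_MOTIF = "GGATC"                        # hypothetical binding motif

def affinity(seq):
    """Crude stand-in: best count of motif characters matched at any offset."""
    return max(sum(a == b for a, b in zip(seq[i:], TARGET_MOTIF))
               for i in range(len(seq) - len(TARGET_MOTIF) + 1))

pool = ["".join(random.choices(BASES, k=40)) for _ in range(2000)]
for rnd in range(8):                          # increasingly stringent rounds
    cutoff = sorted(affinity(s) for s in pool)[int(0.8 * len(pool))]
    bound = [s for s in pool if affinity(s) >= cutoff]   # filter-binding step
    # "PCR amplification" with a 1% per-base mutation rate
    pool = ["".join(c if random.random() > 0.01 else random.choice(BASES)
                    for c in random.choice(bound)) for _ in range(2000)]

print("best affinity after selection:", max(affinity(s) for s in pool))
```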
2017-01-01
This article studies correlated two-person games constructed from games with independent players as proposed in Iqbal et al. (2016 R. Soc. open sci. 3, 150477. (doi:10.1098/rsos.150477)). The games are played in a collective manner, both in a two-dimensional lattice where the players interact with their neighbours, and with players interacting at random. Four game types are scrutinized in iterated games where the players are allowed to change their strategies, adopting that of their best paid mate neighbour. Particular attention is paid in the study to the effect of a variable degree of correlation on Nash equilibrium strategy pairs. PMID:29291120
High-resolution digital holography with the aid of coherent diffraction imaging.
Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang
2015-08-10
The image reconstructed in ordinary digital holography cannot attain the resolution achieved by photographic materials, which makes it less preferable for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably in comparison to ordinary digital holography. Theoretical analysis is supported by numerical simulation. The feasibility of the method is also studied experimentally.
Color image encryption based on gyrator transform and Arnold transform
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Gao, Bo
2013-06-01
A color image encryption scheme using the gyrator transform and the Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green, and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim using the gyrator transform. The first random phase mask is generated from the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated from the sum of the phase of the interim and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again, combined with the third random phase mask generated from the sum of the previous chaotic maps, and then encrypted into a gray-scale ciphertext. The encryption result has a stationary white-noise distribution and, to some extent, a camouflage property. In the process of encryption and decryption, the rotation angle of the gyrator transform, the iteration counts of the Arnold transform, the parameters of the chaotic maps, and the generated accompanying phase function serve as encryption keys, and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity, and feasibility of the proposed scheme.
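The Arnold scrambling level is straightforward to sketch for a square channel; the map below is the standard cat map, with the iteration count acting as part of the key, while the gyrator transforms and chaotic phase masks of the full scheme are omitted:

```python
import numpy as np

# Arnold (cat map) scrambling of an N x N channel: (x, y) -> (x+y, x+2y) mod N.
# The map is a bijection, so iterating it permutes pixels reversibly.
def arnold(img, iterations):
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations):
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        unscrambled = np.empty_like(out)
        unscrambled[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = unscrambled
    return out

channel = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in channel
key_iters = 12                                             # part of the key
cipher = arnold(channel, key_iters)
assert np.allclose(arnold_inverse(cipher, key_iters), channel)
print("Arnold scrambling round-trips with the correct iteration key")
```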
On-board autonomous attitude maneuver planning for planetary spacecraft using genetic algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2003-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of a time-varying nature. This paper presents an approach for attitude path planning that makes full use of a priori constraint knowledge and is computationally tractable enough to be executed on board a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used 'as is' or to initialize additional deterministic optimization algorithms. A number of example simulations are presented, including the case examples of a generic Europa Orbiter spacecraft in cruise as well as in orbit around Europa. The search times are typically on the order of minutes, thus demonstrating the viability of the presented approach. The results are applicable to all future deep space missions where greater spacecraft autonomy is required. In addition, onboard autonomous attitude planning greatly facilitates navigation and science observation planning, thus benefiting missions to planet Earth as well.
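A toy sketch of the penalty-based GA search (a 1-D "attitude" with one keep-out cone, truncation selection, blend crossover, and Gaussian mutation; all values are illustrative assumptions far simpler than the constrained three-axis problem):

```python
import numpy as np

# GA over waypoint sequences: cost = slew effort + penalty for waypoints that
# fall inside a keep-out cone (constraints folded into the cost function).
rng = np.random.default_rng(9)
START, GOAL, KEEP_OUT, CONE = 0.0, 2.5, 1.2, 0.15   # radians, toy 1-D attitude

def cost(path):
    full = np.concatenate(([START], path, [GOAL]))
    length = np.sum(np.abs(np.diff(full)))               # slew effort
    violation = np.sum(np.abs(full - KEEP_OUT) < CONE)   # constraint hits
    return length + 10.0 * violation                     # penalty formulation

pop = rng.uniform(0, 3, (60, 8))                         # 8 waypoints per individual
for gen in range(200):
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]              # truncation selection
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    children = 0.5 * (a + b) + rng.normal(0, 0.05, (40, 8))  # crossover + mutation
    pop = np.vstack([parents, children])                 # good solutions persist

best = pop[np.argmin([cost(p) for p in pop])]
print(f"best cost: {cost(best):.3f}")
```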
Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V
2014-02-10
Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting sub-nanometer line-widths. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed in nonlinear mixing processes. The multiwavelength generation is observed both in first and second Stokes waves.
NASA Astrophysics Data System (ADS)
Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.
2018-06-01
We report on random lasing observed in a 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical of lasing. An additional piece of Er-doped fiber included in the laser cavity enables stable laser generation with a linewidth narrower than 10 kHz.
Lash, Timothy L
2007-11-26
The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and the important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters to allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of the true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretation. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, thus elevating the discussion of study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
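The following sketch shows the general shape of such a probabilistic bias analysis. The conventional numbers mirror the glyphosate/multiple myeloma example above (HR 2.6, 95% CI 0.7 to 9.4), but the bias model, the lognormal bias-parameter distribution, and its parameters are illustrative assumptions, not those of the original analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Conventional result: HR 2.6 (95% CI 0.7-9.4); recover the log-scale SE
# from the width of the reported interval.
hr_conv = 2.6
se_log = (np.log(9.4) - np.log(0.7)) / (2 * 1.96)
N_ITER = 100_000

adjusted = np.empty(N_ITER)
for i in range(N_ITER):
    # Draw a bias parameter from its assigned distribution (here: an
    # assumed confounding relative risk).
    bias_rr = rng.lognormal(mean=np.log(1.5), sigma=0.3)
    hr_adj = hr_conv / bias_rr                 # adjust for the sampled bias
    # Re-introduce conventional random error on the log scale.
    adjusted[i] = np.exp(rng.normal(np.log(hr_adj), se_log))

lo, med, hi = np.percentile(adjusted, [2.5, 50.0, 97.5])
print(f"median HR {med:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```

Swapping in an alternative distribution for `bias_rr` takes one line, which is exactly the rapid re-analysis of bias assumptions the abstract advocates.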
De Nardis, Camilla; Hendriks, Linda J A; Poirier, Emilie; Arvinte, Tudor; Gros, Piet; Bakker, Alexander B H; de Kruif, John
2017-09-01
Bispecific antibodies combine two different antigen-binding sites in a single molecule, enabling more specific targeting, novel mechanisms of action, and higher clinical efficacies. Although they have the potential to outperform conventional monoclonal antibodies, many bispecific antibodies have issues regarding production, stability, and pharmacokinetic properties. Here, we describe a new approach for generating bispecific antibodies using a common light chain format and exploiting the stable architecture of human immunoglobulin G1. We used iterative experimental validation and computational modeling to identify multiple Fc variant pairs that drive efficient heterodimerization of the antibody heavy chains. Accelerated stability studies enabled selection of one Fc variant pair dubbed "DEKK" consisting of substitutions L351D and L368E in one heavy chain combined with L351K and T366K in the other. Solving the crystal structure of the DEKK Fc region at a resolution of 2.3 Å enabled detailed analysis of the interactions inducing CH3 interface heterodimerization. Local shifts in the IgG backbone accommodate the introduction of lysine side chains that form stabilizing salt-bridge interactions with substituted and native residues in the opposite chain. Overall, the CH3 domain adapted to these shifts at the interface, yielding a stable Fc conformation very similar to that in wild-type IgG. Using the DEKK format, we generated the bispecific antibody MCLA-128, targeting human EGF receptors 2 and 3. MCLA-128 could be readily produced and purified at industrial scale with a standard mammalian cell culture platform and a routine purification protocol. Long-term accelerated stability assays confirmed that MCLA-128 is highly stable and has excellent biophysical characteristics. PMID:28655766
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
ITER Construction—Plant System Integration
NASA Astrophysics Data System (ADS)
Tada, E.; Matsuda, S.
2009-02-01
This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays the central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in-kind by the member countries, integrated project management must be scoped in advance of the real work; this includes the design, procurement, system assembly, testing, licensing and commissioning of ITER.
Lensless digital holography with diffuse illumination through a pseudo-random phase mask.
Bernet, Stefan; Harm, Walter; Jesacher, Alexander; Ritsch-Marte, Monika
2011-12-05
Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern onto a CMOS chip, which is recorded once as a reference. A sample that is afterwards inserted somewhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration the method works in real time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired so-called twin images. For demonstration, a high-resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, e.g. insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.
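The paper's actual single-step reconstruction is not reproduced here, but one common non-iterative way to compare a sample speckle record against a stored reference is a regularized (Wiener-type) deconvolution in the Fourier domain; the sketch below is written under that assumption:

```python
import numpy as np

def reconstruct(sample: np.ndarray, reference: np.ndarray,
                eps: float = 1e-3) -> np.ndarray:
    """Compare the modified speckle pattern with the stored reference in a
    single, non-iterative Fourier-domain step (illustrative Wiener filter)."""
    S = np.fft.fft2(sample)
    R = np.fft.fft2(reference)
    # Divide out the reference speckle; eps regularizes spectral zeros.
    field = np.fft.ifft2(S * np.conj(R) / (np.abs(R) ** 2 + eps))
    return np.fft.fftshift(field)  # complex field: amplitude and phase
```

Taking `np.abs(...)` and `np.angle(...)` of the returned field then yields the quantitative amplitude and phase images referred to above.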
2014-01-01
Background Patients with mixed hyperlipidemia usually need combination therapy to achieve low-density lipoprotein cholesterol (LDL-C) and triglyceride (TG) target values for reduction of cardiovascular risk. This study investigated the efficacy and safety of adding a new hypolipidemic agent, coenzyme A (CoA), to stable statin therapy in patients with mixed hyperlipidemia. Methods In this multi-center, 8-week, double-blind study, adults who had received ≥8 weeks of stable statin therapy and had hypertriglyceridemia (TG level at 2.3-6.5 mmol/L) were randomized to receive CoA 400 U/d or placebo plus a stable dosage of statin. Efficacy was assessed by the changes in the levels and patterns of lipoproteins. Tolerability was assessed by the incidence and severity of adverse events (AEs). Results A total of 304 patients with mixed hyperlipidemia were randomized to receive CoA 400 U/d plus statin or placebo plus statin (n = 152, each group). After treatment for 8 weeks, the mean percent change in TG was significantly greater with CoA plus statin than with placebo plus statin (-25.9% vs -4.9%, respectively; p = 0.0003). CoA plus statin was associated with significant reductions in TC (-9.1% vs -3.1%; p = 0.0033), LDL-C (-9.9% vs 0.1%; p = 0.003), and non-high-density lipoprotein cholesterol (-13.5% vs -5.7%; p = 0.0039). There was no significant difference in the frequency of AEs between groups. No serious AEs were considered treatment related. Conclusions In these adult patients with persistent hypertriglyceridemia, CoA plus statin therapy improved TG and other lipoprotein parameters to a greater extent than statin alone and had no obvious adverse effects. Trial registration Current Controlled Trials ClinicalTrials.gov ID NCT01928342. PMID:24382338
Kardas, Przemyslaw
2007-01-01
Background A randomized, controlled trial was conducted in an outpatient setting to examine the effect of beta-blocker dosing frequency on patient compliance, clinical outcome, and health-related quality of life in patients with stable angina pectoris. Methods One hundred and twelve beta-blocker-naive outpatients with stable angina pectoris were randomized to receive betaxolol, 20 mg once daily, or metoprolol tartrate, 50 mg twice daily, for 8 weeks. The principal outcome measure was overall compliance measured electronically, whereas the secondary outcome measures were drug effectiveness and health-related quality of life. Results The overall compliance was 86.5 ± 21.3% in the betaxolol group versus 76.1 ± 26.3% in the metoprolol group (p < 0.01), and the correct number of doses was taken on 84.4 ± 21.6% and 64.0 ± 31.7% of treatment days, respectively (p < 0.0001). The percentage of missed doses was 14.5 ± 21.5% in the once-daily group and 24.8 ± 26.4% in the twice-daily group (p < 0.01). The percentages of doses taken in the correct time window (58.6% vs 42.0%, p = 0.01), correct interdose intervals (77.4% vs 53.1%, p < 0.0001), and therapeutic coverage (85.6% vs 73.7%, p < 0.001) were significantly higher in the once-daily group. Both studied drugs had similar antianginal effectiveness. Health-related quality of life improved in both groups, but the increase was more pronounced in some dimensions in the betaxolol arm. Conclusions The study demonstrates that patient compliance with once-daily betaxolol is significantly better than with twice-daily metoprolol, and that this treatment provides better quality of life. These results demonstrate the possible therapeutic advantages of once-daily over twice-daily beta-blockers in the treatment of stable angina pectoris. PMID:17580734