Sample records for iterative optimization process

  1. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied as a result of both system optimization and improved knowledge of the clients’ requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; the standardization of component arrangements nevertheless allowed one cold box to be removed. Another cold box, planned for redundancy, has been removed following a modification of the Tokamak in-Cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aiming at process optimization and simplification as well as operational reliability, stability and flexibility.

  2. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration is an essential element of the design process: engineers optimize their designs through iteration. Research on iteration in Primary Design Education is, however, scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practice. Spontaneous playing behavior of children indicates that iteration fits in…

  3. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
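
    A minimal numpy sketch of the underlying idea: a fixed number of unrolled proximal (ISTA) steps acting as a fixed-complexity pursuit process. In a learned (LISTA-style) encoder the matrices and threshold below would be trained rather than derived from the dictionary; everything here is illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding: the proximal operator of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def unrolled_ista(x, D, lam, K=5):
        """Fixed-complexity pursuit: K unrolled ISTA steps for min 0.5*||x - Dz||^2 + lam*||z||_1.

        In a learned (LISTA-style) encoder, W, S and the threshold would be trained
        parameters instead of being derived from D as they are here.
        """
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        W = D.T / L
        S = np.eye(D.shape[1]) - D.T @ D / L
        z = soft(W @ x, lam / L)
        for _ in range(K - 1):                 # fixed number of layers = fixed latency
            z = soft(S @ z + W @ x, lam / L)
        return z

    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50))
    x = D @ (rng.standard_normal(50) * (rng.random(50) < 0.1))   # sparse synthetic signal
    z = unrolled_ista(x, D, lam=0.5)
    print(np.count_nonzero(z))
    ```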

  4. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

    Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
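
    A minimal sketch of serial IDA*, where each iteration is a depth-first search bounded by a cost threshold that grows to the smallest f = g + h value exceeding the previous bound; parallel window search would run several such thresholds concurrently. The toy graph and heuristic are illustrative.

    ```python
    import math

    # Toy weighted graph and admissible heuristic (illustrative values).
    graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'D': 5}, 'C': {'D': 1}, 'D': {}}
    h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}

    def ida_star(start, goal):
        def search(node, g, bound, path):
            f = g + h[node]
            if f > bound:
                return f, None               # report the smallest f over the bound
            if node == goal:
                return f, path
            nxt = math.inf
            for child, w in graph[node].items():
                t, found = search(child, g + w, bound, path + [child])
                if found is not None:
                    return t, found
                nxt = min(nxt, t)
            return nxt, None

        bound = h[start]                      # first cost threshold
        while True:                           # each pass is one IDA* iteration
            bound, found = search(start, 0, bound, [start])
            if found is not None:
                return found, bound
            if bound == math.inf:
                return None, bound

    print(ida_star('A', 'D'))                 # -> (['A', 'B', 'C', 'D'], 4)
    ```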

  5. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
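
    A toy sketch of the idea of using a genetic algorithm to order coupled processes so that feedback couplings (outputs fed to earlier processes) are minimized; the coupling matrix and GA settings are illustrative and not DeMAID's.

    ```python
    import random

    N = 6
    random.seed(1)
    # coupling[i][j] is True if process i needs the output of process j (illustrative DSM)
    coupling = [[random.random() < 0.3 and i != j for j in range(N)] for i in range(N)]

    def feedbacks(order):
        """Count couplings that point 'backwards' in the ordering (feedback loops)."""
        pos = {p: k for k, p in enumerate(order)}
        return sum(coupling[i][j] and pos[j] > pos[i] for i in range(N) for j in range(N))

    def crossover(a, b):
        """Order crossover: keep a slice of parent a, fill the rest in b's order."""
        i, j = sorted(random.sample(range(N), 2))
        mid = a[i:j]
        rest = [g for g in b if g not in mid]
        return rest[:i] + mid + rest[i:]

    def mutate(p):
        i, j = random.sample(range(N), 2)
        p[i], p[j] = p[j], p[i]

    pop = [random.sample(range(N), N) for _ in range(30)]
    for _ in range(100):                               # generations
        pop.sort(key=feedbacks)
        elite = pop[:10]
        pop = elite + [crossover(*random.sample(elite, 2)) for _ in range(20)]
        for child in pop[10:]:
            if random.random() < 0.2:
                mutate(child)
    pop.sort(key=feedbacks)
    print(pop[0], feedbacks(pop[0]))                   # best ordering and its feedback count
    ```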

  6. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  7. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node’s spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, the a priori information and an iteration time. The Ing process iteratively combines prior information from neighbours via the transformation matrix and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks, with a speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. Comparisons with eight renowned centralities in simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
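
    A compact numpy sketch of the gathering idea, assuming a plain adjacency matrix as the transformation matrix and a uniform prior; both are parameters of the actual algorithm, and the toy network is illustrative.

    ```python
    import numpy as np

    def ing_scores(A, prior, n_iter=3):
        """Iteratively gather neighbour information: s <- A @ s, starting from a prior.

        A is the transformation matrix (here the raw adjacency; degree- or
        weight-based variants recover other centralities), prior is the a priori
        information vector, and n_iter is the iteration time. With a uniform prior,
        one step reproduces degree centrality; many steps approach eigenvector
        centrality, the limit case noted in the abstract.
        """
        s = prior.astype(float)
        for _ in range(n_iter):
            s = A @ s                 # each node accumulates its neighbours' scores
            s /= np.linalg.norm(s)    # normalise to keep the scores comparable
        return s

    # Small undirected toy network (illustrative).
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], float)
    prior = np.ones(4)                          # uniform prior
    print(np.argsort(-ing_scores(A, prior)))    # node indices ranked by influence
    ```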

  8. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In the phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for the simplicity of implementation. This iterative process can advantageously be deployed in the combination with a spatial light modulator (SLM) enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
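
    A bare-bones numpy sketch of the Gerchberg-Saxton loop between the SLM/pupil plane and the focal plane; the spiral-phase (vortex) target discussed in the abstract would enter through the choice of target amplitude, which is only a plain spot in this illustration.

    ```python
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, n_iter=50):
        """Iterate between pupil and focal planes, enforcing the known amplitudes.

        source_amp: amplitude constraint in the SLM/pupil plane.
        target_amp: desired focal-plane amplitude (e.g. a vortex-shaped spot).
        Returns the retrieved pupil-plane phase.
        """
        field = source_amp * np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
        for _ in range(n_iter):
            focal = np.fft.fft2(field)
            focal = target_amp * np.exp(1j * np.angle(focal))    # impose target amplitude
            field = np.fft.ifft2(focal)
            field = source_amp * np.exp(1j * np.angle(field))    # impose source amplitude
        return np.angle(field)

    n = 64
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    source = np.ones((n, n))                                      # uniform illumination
    target = np.fft.ifftshift(np.exp(-(x**2 + y**2) / 20.0))      # small focal spot (illustrative)
    phase = gerchberg_saxton(source, target)
    ```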

  9. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  10. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  11. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. The automation of design and testing processes has replaced many repetitive and costly trade-off studies with optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  12. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the optimization problem to be solved at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single-grid iterations as well as with multigrid solvers.

  13. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.

  14. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and by Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter in the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like Bi-CGSTAB are of interest.

  15. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency preserving region of the filter mask at every gap-filling iteration, and the region converges toward the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and yields results comparable to those of the manually optimized DCT2 algorithm, without requiring perfect or full information about the imaging object.
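
    A simplified numpy/scipy sketch of DCT-domain gap filling: the missing bins are repeatedly re-estimated from a low-frequency-preserving DCT mask. In the adaptive scheme described above, the preserved region would be re-derived from the current estimate at every iteration; the fixed square used here, the sizes, and the test data are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def gap_fill(sino, gap_mask, keep_frac=0.1, n_iter=30):
        """Iteratively fill gaps (gap_mask == True) using a low-frequency DCT filter.

        keep_frac sets the preserved low-frequency square; an adaptive version
        would re-derive this region from the current estimate at every iteration.
        """
        est = sino.copy()
        est[gap_mask] = sino[~gap_mask].mean()           # crude initial fill
        ny, nx = sino.shape
        lowpass = np.zeros((ny, nx), bool)
        lowpass[:int(ny * keep_frac), :int(nx * keep_frac)] = True
        for _ in range(n_iter):
            coeffs = dctn(est, norm='ortho') * lowpass   # keep low-frequency content only
            smooth = idctn(coeffs, norm='ortho')
            est[gap_mask] = smooth[gap_mask]             # update only the missing bins
        return est

    sino = np.random.rand(64, 90)                        # stand-in for a real sinogram
    gaps = np.zeros_like(sino, bool)
    gaps[:, 40:44] = True                                # simulated detector gap
    filled = gap_fill(sino, gaps)
    ```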

  16. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting rings, adjust the beam geometry, select inverse planning objectives and their associated weights, and introduce additional IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually in a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented iteratively to reduce hot and cold spots during the optimization process: it begins with the automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into the inverse planning, and turns this into an iterative process with termination criteria. The method has been applied to three clinical sites, prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  17. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  18. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  19. Conjecture Mapping to Optimize the Educational Design Research Process

    ERIC Educational Resources Information Center

    Wozniak, Helen

    2015-01-01

    While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…

  20. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. The paper derives a new method to solve an optimization problem in which the nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  1. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of micro grid operation mode directly affects the safety and stability of its operation. According to the switching process from island mode to grid-connected mode of micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use Radau allocation method to discretize the model, and use Newton iteration method to obtain the optimal solution. Finally, we implement the optimization mode in MATLAB and get the optimal control trajectory of the inverters.

  2. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.
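
    A toy sketch of the surrogate idea, assuming a generic regressor and optimizer in place of NETS and CONMIN: fit a small neural network to sampled (design, analysis-result) pairs, then optimize on the cheap surrogate to obtain a near-optimal starting design. The objective function and tools here are stand-ins, not NETS/PROSSS.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from scipy.optimize import minimize

    def expensive_analysis(x):
        """Stand-in for a finite element analysis (cheap here, costly in practice)."""
        return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 0.1 * np.sin(5 * x[0])

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))                # sampled designs
    y = np.array([expensive_analysis(x) for x in X])     # analysis results

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X, y)

    # Optimize on the cheap surrogate; the result seeds the full optimization.
    res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                   x0=np.zeros(2), method='Nelder-Mead')
    print("near-optimal starting design:", res.x)
    ```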

  3. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increase in dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  4. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  5. Optimal tracking control for a class of nonlinear discrete-time systems with time delays based on heuristic dynamic programming.

    PubMed

    Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan

    2011-12-01

    In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The novel algorithm contains state updating, control policy iteration, and performance index iteration. To get the optimal states, the states are also updated. Furthermore, the "backward iteration" is applied to state updating. Two neural networks are used to approximate the performance index function and compute the optimal control policy for facilitating the implementation of HDP iteration algorithm. At last, we present two examples to demonstrate the effectiveness of the proposed HDP iteration algorithm.

  6. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems offer multiple benefits that can improve the operational efficiency of an organization: the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient updates of the database. Often, more than one reader is needed for installation in an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure that the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader have to be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, fast simulation times, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and worsen the optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. The study also recommends the most suitable settings for both parameters, namely 200 iterations and a swarm size of 800. Finally, the results of this study will enable PSO to operate more efficiently in optimizing the RFID network planning system.
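
    A minimal PSO loop making the two studied parameters, swarm size and iteration number, explicit; the objective is a generic test function standing in for the RFID tag coverage objective, and all settings are illustrative.

    ```python
    import numpy as np

    def pso(objective, dim, n_swarm=800, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer; n_swarm and n_iter are the parameters studied above."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_swarm, dim))           # particle positions (e.g. reader coordinates)
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = np.apply_along_axis(objective, 1, x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]
        return gbest, pbest_val.min()

    sphere = lambda p: float(np.sum(p ** 2))              # stand-in for the coverage objective
    print(pso(sphere, dim=4, n_swarm=50, n_iter=100))     # smaller settings for a quick run
    ```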

  7. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
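
    A compact numpy sketch of MLEM iterations with a placeholder stopping test; the heuristic statistical criterion of the paper is not reproduced here, a simple relative log-likelihood change test stands in for it, and the system matrix and data are synthetic.

    ```python
    import numpy as np

    def mlem(A, y, n_max=200, tol=1e-4):
        """MLEM with a simple stopping rule (relative log-likelihood change).

        A: system matrix (detector bins x image voxels), y: measured counts.
        The paper's heuristic statistical criterion would replace the `tol` test.
        """
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                        # sensitivity image A^T 1
        prev_ll = -np.inf
        for k in range(n_max):
            proj = A @ x
            x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
            ll = np.sum(y * np.log(np.maximum(A @ x, 1e-12)) - A @ x)
            if abs(ll - prev_ll) < tol * abs(ll):   # stop before noise starts to dominate
                return x, k + 1
            prev_ll = ll
        return x, n_max

    rng = np.random.default_rng(0)
    A = rng.random((120, 60))
    x_true = rng.random(60)
    y = rng.poisson(A @ x_true * 50)
    x_hat, stopped_at = mlem(A, y.astype(float))
    ```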

  8. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  9. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
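
    A short value-iteration sketch for a toy MDP; in the WSC model of the paper, states correspond to partially built compositions and actions to candidate services, which is abstracted away here, and the transition and reward data are illustrative.

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """P[a][s, s'] transition probabilities, R[a][s] expected rewards."""
        n_states = P[0].shape[0]
        V = np.zeros(n_states)
        while True:
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)       # optimal value and greedy policy
            V = V_new

    # Tiny illustrative MDP: 3 states, 2 actions.
    P = [np.array([[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]]),
         np.array([[0.1, 0.9, 0.0], [0.1, 0.0, 0.9], [0.0, 0.0, 1.0]])]
    R = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.2, 1.0])]
    V, policy = value_iteration(P, R)
    print(policy)
    ```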

  10. Engineering the on-axis intensity of Bessel beam by a feedback tuning loop

    NASA Astrophysics Data System (ADS)

    Li, Runze; Yu, Xianghua; Yang, Yanlong; Peng, Tong; Yao, Baoli; Zhang, Chunmin; Ye, Tong

    2018-02-01

    The Bessel beam belongs to a typical class of non-diffractive optical fields that are characterized by their invariant focal profiles along the propagation direction. However, ideal Bessel beams only rigorously exist in theory; Bessel beams generated in the lab are quasi-Bessel beams with finite focal extensions and varying intensity profiles along the propagation axis. The ability to engineer the on-axis intensity profile to the desired shape is essential for many applications. Here we demonstrate an iterative optimization-based approach to engineering the on-axis intensity of Bessel beams. The genetic algorithm is used to demonstrate this approach. Starting with a traditional axicon phase mask, in the design process, the computed on-axis beam profile is fed into a feedback tuning loop of an iterative optimization process, which searches for an optimal radial phase distribution that can generate a generalized Bessel beam with the desired on-axis intensity profile. The experimental implementation involves a fine-tuning process that adjusts the originally targeted profile so that the optimization process can optimize the phase mask to yield an improved on-axis profile. Our proposed method has been demonstrated in engineering several zeroth-order Bessel beams with customized on-axis profiles. High accuracy and high energy throughput merit its use in many applications.

  11. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.

  12. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
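
    A toy min-conflicts style repair loop in the same spirit: start from a flawed schedule and repeatedly move a task involved in a constraint violation to the slot that leaves the fewest conflicts. The conflict model is illustrative, and the resource constraints and perturbation costs of the real system are omitted.

    ```python
    import random

    random.seed(0)
    n_tasks, n_slots = 8, 8
    # Illustrative constraint model: each task conflicts with two other tasks if co-scheduled.
    conflicts_with = {t: random.sample(range(n_tasks), 2) for t in range(n_tasks)}

    def violations(schedule):
        """Pairs of conflicting tasks placed in the same slot."""
        return [(t, u) for t in range(n_tasks) for u in conflicts_with[t]
                if t != u and schedule[t] == schedule[u]]

    schedule = [random.randrange(n_slots) for _ in range(n_tasks)]    # initial, flawed schedule
    for _ in range(1000):                                             # repair iterations
        bad = violations(schedule)
        if not bad:
            break                                                     # conflict-free schedule found
        task = random.choice(random.choice(bad))                      # pick a task in some violation
        # Move it to the slot that leaves the fewest violations (a minimal-perturbation step).
        schedule[task] = min(range(n_slots),
                             key=lambda s: len(violations(schedule[:task] + [s] + schedule[task + 1:])))
    print(schedule, violations(schedule))
    ```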

  13. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.

  14. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.

  15. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.

  16. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.

  17. Conceptual design and structural analysis for an 8.4-m telescope

    NASA Astrophysics Data System (ADS)

    Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego

    2004-09-01

    This paper describes the conceptual design of the optics support structures of a telescope with a primary mirror of 8.4 m, the same size as the Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure that supports the primary and secondary mirrors and keeps them joined as rigidly as possible. To this end, an optimization over several models was carried out. This iterative design process includes specifications development and concept generation and evaluation. The process included Finite Element Analysis (FEA) as well as other analytical calculations. A Quality Function Deployment (QFD) matrix was used to obtain the telescope tube and spider specifications. Eight spider and eleven tube geometric concepts were proposed. They were compared in decision matrices using performance indicators and parameters. Tubes and spiders underwent an iterative optimization process. The best tube and spider concepts were assembled together. All assemblies were compared and ranked according to their performance.

  18. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  19. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs.

    PubMed

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-21

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. In the re-planning process, a manual trial-and-error approach to fine-tuning the planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan obtained with our algorithm after 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
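
    A highly simplified numpy sketch of the two-loop structure: an inner weighted quadratic fluence optimization (plain projected gradient here) and an outer loop that raises the weights of voxels still above their reference dose. Clinical dose models, DVH computation, and GPU details are omitted, and all data are synthetic.

    ```python
    import numpy as np

    def replan(D, d_ref, n_outer=10, n_inner=100):
        """D: dose-influence matrix (voxels x beamlets); d_ref: per-voxel dose of the original plan."""
        w = np.ones(D.shape[0])                          # voxel weighting factors
        x = np.zeros(D.shape[1])                         # fluence map
        for _ in range(n_outer):                         # outer loop: adjust voxel weights
            L = np.linalg.norm(D, 2) ** 2 * w.max()      # step-size bound for the weighted quadratic
            for _ in range(n_inner):                     # inner loop: fluence map optimization
                grad = D.T @ (w * (D @ x - d_ref))
                x = np.maximum(x - grad / L, 0.0)        # projected gradient step (non-negative fluence)
            w *= np.where(D @ x > d_ref, 1.5, 1.0)       # penalize voxels still above their reference dose
        return x

    rng = np.random.default_rng(0)
    D = rng.random((500, 40))
    d_ref = D @ rng.random(40)                           # stand-in for the original-plan dose
    x_new = replan(D, d_ref)
    ```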

  20. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction (OPC) is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction: pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initially drawn position, guided by the lithographic model, to finally settle at the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems such as the brain. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool able to capture and represent complex input/output relationships; through a learning procedure, the network can accurately predict the behavior of a system. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network provides a good initial guess for each segment on which OPC is carried out, and the good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The radial basis function network for this system was optimized with a genetic algorithm, an artificial-intelligence optimization method with a high probability of reaching the global optimum. In preliminary results, the required iterations were reduced from five to two for a simple dumbbell-shaped layout.
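
    A small numpy sketch of a Gaussian radial basis function network fitted to (segment features -> edge shift) pairs, which could then supply the initial edge displacement for new segments before the OPC iterations; the features, training data, and RBF width are illustrative, not the paper's.

    ```python
    import numpy as np

    class RBFNet:
        """Gaussian RBF network: phi_i(x) = exp(-||x - c_i||^2 / (2 sigma^2)), linear output layer."""
        def __init__(self, centers, sigma=1.0):
            self.c, self.sigma = centers, sigma
        def _phi(self, X):
            d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * self.sigma ** 2))
        def fit(self, X, y):
            self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
            return self
        def predict(self, X):
            return self._phi(X) @ self.w

    rng = np.random.default_rng(0)
    # Illustrative segment features (e.g. local density, distance to neighbour, segment length).
    X_train = rng.random((300, 3))
    shift_train = 5.0 * X_train[:, 0] - 2.0 * X_train[:, 1] + rng.normal(0, 0.1, 300)   # known edge shifts
    net = RBFNet(centers=X_train[rng.choice(300, 30, replace=False)], sigma=0.5).fit(X_train, shift_train)
    initial_guess = net.predict(rng.random((5, 3)))    # starting edge shifts for new segments
    ```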

  1. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED favors deconvolving a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem should target an impulse train as the output goal and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
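
    The non-iterative character of MOMEDA follows from the fact that, for a fixed target impulse train, the optimal FIR filter is the solution of a single linear system. The sketch below illustrates that structure under simplifying assumptions (integer period, no windowing of the target train, arbitrary output scale); it is not the authors' reference code.

```python
import numpy as np

def momeda_filter(x, L, period):
    """Sketch of the MOMEDA idea: solve directly for a filter whose output
    matches an impulse train with the given period (no iterations).

    x      : 1-D vibration signal
    L      : filter length
    period : assumed fault period, in samples (integer)
    """
    N = len(x)
    # Delayed-signal matrix: column k holds the reversed window x[k : k+L]
    X0 = np.array([x[k:k + L][::-1] for k in range(N - L + 1)]).T   # (L, N-L+1)
    # Target: an impulse train spaced by `period`
    t = np.zeros(N - L + 1)
    t[::period] = 1.0
    # Direct (non-iterative) solution, up to an arbitrary scale factor
    f = np.linalg.solve(X0 @ X0.T, X0 @ t)
    y = X0.T @ f                                  # deconvolved output
    return f, y
```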

  2. Optimization of the propulsion for multistage solid rocket motor launchers

    NASA Astrophysics Data System (ADS)

    Calabro, M.; Dufour, A.; Macaire, A.

    2002-02-01

    Some tools focused on a rapid multidisciplinary optimization capability for multistage launch vehicle design were developed at EADS-LV. These tools may be broken down into two categories: those related to propulsion design optimization, and a computer code devoted to trajectory optimization under constraints. Both are linked in order to obtain an optimal vehicle design after an iterative process. After a description of both categories of tools, an example of application is given for a small space launcher.

  3. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part III

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-specified components with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint, and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.

  4. Models of expert assessments and their study in problems of choice and decision-making in management of motor transport processes

    NASA Astrophysics Data System (ADS)

    Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.

    2018-05-01

    The article deals with decision making concerning transport tasks on search iterations in the management of motor transport processes. An optimal selection of the best option for specific situations is suggested in the management of complex multi-criteria transport processes.

  5. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
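
    As one concrete example of an iteratively updated Hessian, a BFGS-type update built from the latest geometry step and gradient change is sketched below. The paper's interpolated-surface scheme uses its own update and surface construction; this snippet only illustrates the generic idea, and its inputs (H, dx, dg) are assumed.

```python
import numpy as np

def bfgs_update(H, dx, dg):
    """One BFGS-style Hessian update, a common stand-in for the 'iteratively
    updated Hessian' idea (the paper's specific update scheme may differ).

    H  : current approximate Hessian (n, n)
    dx : geometry step, x_new - x_old
    dg : gradient change, g_new - g_old
    """
    Hdx = H @ dx
    return (H
            + np.outer(dg, dg) / (dg @ dx)        # curvature gained along the step
            - np.outer(Hdx, Hdx) / (dx @ Hdx))    # stale curvature removed
```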

  6. Incorporating Multi-criteria Optimization and Uncertainty Analysis in the Model-Based Systems Engineering of an Autonomous Surface Craft

    DTIC Science & Technology

    2009-09-01

    SAS: Statistical Analysis Software; SE: Systems Engineering; SEP: Systems Engineering Process; SHP: Shaft Horsepower; SIGINT: Signals Intelligence. ... management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem...

  7. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ). However, the underlying approach can be used for any heliostat type. During the optimization the performance is calculated using the ray-tracing tool SolCal. The costs of the heliostats are calculated by use of a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.
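
    The iterative geometry-and-layout loop can be pictured with a generic genetic-algorithm skeleton such as the one below. The individual encoding, the mutation and crossover operators, and the cost-performance evaluation (ray tracing with SolCal plus the cost function) are all problem-specific and appear here only as assumed callables.

```python
import numpy as np

def genetic_optimize(cost_performance, random_layout, mutate, crossover,
                     pop_size=40, n_gen=100, seed=0):
    """Generic GA skeleton: each individual encodes heliostat geometry and field
    layout; fitness is the cost-performance ratio (lower is better). All four
    callables are placeholders for the problem-specific pieces."""
    rng = np.random.default_rng(seed)
    pop = [random_layout(rng) for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([cost_performance(ind) for ind in pop])
        order = np.argsort(scores)
        parents = [pop[i] for i in order[:pop_size // 2]]     # keep the better half
        children = [mutate(crossover(parents[rng.integers(len(parents))],
                                     parents[rng.integers(len(parents))], rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cost_performance)
```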

  8. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.

  9. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing the cost function. The toolbox can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  10. Finding the Optimal Guidance for Enhancing Anchored Instruction

    ERIC Educational Resources Information Center

    Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.

    2014-01-01

    This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…

  11. Evolutionary engineering for industrial microbiology.

    PubMed

    Vanee, Niti; Fisher, Adam B; Fong, Stephen S

    2012-01-01

    Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, by contrast, the desired approach is to use rational prospective design to address targeted problems. When considering details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that constitute an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.

  12. Guidelines for internal optics optimization of the ITER EC H and CD upper launcher

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moro, A.; Bruschi, A.; Figini, L.

    2014-02-12

    The importance of localized injection of Electron Cyclotron waves to control Magneto-HydroDynamic instability is well assessed in tokamak physics, and the set of four Electron Cyclotron (EC) Upper Launchers (UL) in ITER is mainly designed for this purpose. Each of the 4 ULs uses quasi-optical mirrors (shaping and plane, fixed and steerable) to redirect and focus 8 beams (in two rows, with power close to 1 MW per beam coming from the EC transmission lines) into the plasma region where the instability appears. Small beam dimensions and maximum beam superposition guarantee the necessary localization of the driven current. To achieve the goal of MHD stabilization with minimum EC power, so as to preserve the energy confinement in the outer half of the plasma cross section, optimization of the quasi-optical design is required and a guideline strategy is presented. As a result of this process and following the guidelines indicated, modifications of the design (new mirror positions, rotation axes and/or focal properties) will be proposed for the next step of an iterative process, including the mandatory compatibility check with the mechanical constraints.

  13. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.

  14. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated every time step. However, the Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling time period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract advantageous features of static scheduling algorithms to the maximum extent.

  15. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics-processing-unit-accelerated Krylov subspace iterative methods.

  16. Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures

    DTIC Science & Technology

    2003-01-01

    Thermal effects in NiFe magnetic nano-elements are calculated. With decreasing size of magnetic nanostructures, thermal effects become increasingly important. The thermal field is assumed to be a Gaussian random process with the statistical properties ⟨H_th,i(t)⟩ = 0 and ⟨H_th,i(t) H_th,j(t′)⟩ ∝ δ_ij δ(t − t′). The optimal path satisfies D_k^⊥ = ∇E(M^(k)) − [∇E(M^(k)) · t] t = 0, for k = 1, …, m (eq. 12), and can be found using an iterative scheme; in each iteration step the...

  17. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371

  18. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review of the recent developments of correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost needed for an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied in many composite filters based on linear composition of training images as an optimization means.

  19. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
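
    A single forward-backward iteration for an ℓ1-regularized least-squares problem, the basic building block that the proposed splitting structures elaborate on, might look as follows. This is a schematic, dense-matrix sketch (the step size gamma should satisfy gamma < 2/||A||², and the real algorithms add block splitting, preconditioning and distribution); it is not the authors' MATLAB code.

```python
import numpy as np

def forward_backward_l1(A, y, lam, gamma, n_iter=200):
    """Minimal forward-backward iteration for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # forward (gradient) step on the data term
        z = x - gamma * grad
        x = np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)  # backward: soft-threshold prox
    return x
```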

  20. 2-D Ultrasound Sparse Arrays Multidepth Radiation Optimization Using Simulated Annealing and Spiral-Array Inspired Energy Functions.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc C; Liebgott, Herve

    2016-12-01

    Full matrix arrays are excellent tools for 3-D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by the sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is faced here by introducing novel energy functions in the simulated annealing (SA) algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to or even better than the one provided by a spiral array, here assumed as reference, may be obtained. The finite-time convergence properties of SA allow the duration of the optimization process to be set in advance.
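
    The optimization loop described above follows the standard simulated-annealing pattern: move one element, evaluate the multi-depth energy, and accept or reject with the Metropolis rule under a cooling schedule. The skeleton below shows only that pattern; the energy function, the element-translation move and the cooling parameters are assumptions, not the authors' settings.

```python
import numpy as np

def simulated_annealing(pos0, energy, propose, n_iter=10000, T0=1.0, alpha=0.999, seed=0):
    """Generic SA skeleton: `energy` scores an element layout (e.g. main-lobe width
    plus side/grating-lobe levels at several depths), `propose` translates one
    randomly chosen element. Both are user-supplied placeholders here."""
    rng = np.random.default_rng(seed)
    pos, E, T = pos0.copy(), energy(pos0), T0
    for _ in range(n_iter):
        cand = propose(pos, rng)                  # translate one element
        E_cand = energy(cand)
        if E_cand < E or rng.random() < np.exp((E - E_cand) / T):
            pos, E = cand, E_cand                 # accept improving or occasionally worsening moves
        T *= alpha                                # geometric cooling schedule
    return pos, E
```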

  1. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  2. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-01

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. An inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.

  3. No-go theorem for iterations of unknown quantum gates

    NASA Astrophysics Data System (ADS)

    Soleimanifar, Mehdi; Karimipour, Vahid

    2016-01-01

    We propose a no-go theorem by proving the impossibility of constructing a deterministic quantum circuit that iterates a unitary oracle by calling it only once. Different schemes are provided to bypass this result and to approximately realize the iteration. The optimal scheme is also studied. An interesting observation is that for a large number of iterations, a trivial strategy like using the identity channel has the optimal performance, and preprocessing, postprocessing, or using resources like entanglement does not help at all. Intriguingly, the number of iterations, when being large enough, does not affect the performance of the proposed schemes.

  4. Fast machine-learning online optimization of ultra-cold-atom experiments.

    PubMed

    Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R

    2016-05-16

    We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
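
    A Gaussian-process 'learner' of the kind described can be sketched with a simple model-fit / acquisition / experiment loop, for instance using scikit-learn's GaussianProcessRegressor. The experiment interface, parameter bounds, candidate sampling and upper-confidence-bound acquisition below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_online_optimize(run_experiment, bounds, n_init=5, n_iter=20, seed=0):
    """Fit a GP to (parameters, BEC-quality) pairs and pick the next evaporation-ramp
    parameters by maximizing a simple upper-confidence bound on random candidates.
    `run_experiment` stands in for the real apparatus."""
    rng = np.random.default_rng(seed)
    sample = lambda n: np.array([rng.uniform(lo, hi, n) for lo, hi in bounds]).T
    X = sample(n_init)
    y = np.array([run_experiment(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                               # statistical model of parameters -> quality
        cand = sample(256)
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sigma)]  # explore/exploit trade-off
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next))
    return X[np.argmax(y)], y.max()
```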

  5. Fast machine-learning online optimization of ultra-cold-atom experiments

    PubMed Central

    Wigley, P. B.; Everitt, P. J.; van den Hengel, A.; Bastian, J. W.; Sooriyabandara, M. A.; McDonald, G. D.; Hardman, K. S.; Quinlivan, C. D.; Manju, P.; Kuhn, C. C. N.; Petersen, I. R.; Luiten, A. N.; Hope, J. J.; Robins, N. P.; Hush, M. R.

    2016-01-01

    We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system. PMID:27180805

  6. MO-B-BRB-01: Optimize Treatment Planning Process in Clinical Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, W.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS; Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process; Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning; Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment; Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  7. MO-B-BRB-00: Optimizing the Treatment Planning Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS; Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process; Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning; Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment; Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  8. MO-B-BRB-03: Systems Engineering Tools for Treatment Planning Process Optimization in Radiation Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapur, A.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS; Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process; Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning; Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment; Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  9. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  10. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
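
    The zoom-in idea, evaluating the Fourier transform on a local frequency grid and iteratively narrowing that grid around the detected peak, can be illustrated with the toy functions below. The grid size, number of iterations and shrink factor are arbitrary choices for illustration and are not those of the published ilFT.

```python
import numpy as np

def local_dft(x, dt, freqs):
    """Evaluate the DFT of samples x directly at arbitrary frequencies (Hz)."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n * dt)) @ x

def iterative_local_ft(x, dt, f_lo, f_hi, n_freq=64, n_iter=4, shrink=0.1):
    """Toy zoom-in: evaluate the spectrum on a local grid, find the peak,
    and shrink the frequency window around it on each iteration."""
    for _ in range(n_iter):
        freqs = np.linspace(f_lo, f_hi, n_freq)
        spectrum = np.abs(local_dft(x, dt, freqs))
        f_peak = freqs[np.argmax(spectrum)]
        half = 0.5 * shrink * (f_hi - f_lo)       # narrow the window around the peak
        f_lo, f_hi = f_peak - half, f_peak + half
    return f_peak
```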

  11. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ideas of the IDR algorithms consist in the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches for the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  12. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme over alternative numerical iterative optimization schemes because of the unreliable convergence properties of those iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.

  13. Design intent optimization at the beyond 7nm node: the intersection of DTCO and EUVL stochastic mitigation techniques

    NASA Astrophysics Data System (ADS)

    Crouse, Michael; Liebmann, Lars; Plachecki, Vince; Salama, Mohamed; Chen, Yulu; Saulnier, Nicole; Dunn, Derren; Matthew, Itty; Hsu, Stephen; Gronlund, Keith; Goodwin, Francis

    2017-03-01

    The initial readiness of EUV patterning was demonstrated in 2016 with IBM Alliance's 7nm device technology. The focus has now shifted to driving the 'effective' k1 factor and enabling the second generation of EUV patterning. Thus, Design Technology Co-optimization (DTCO) has become a critical part of technology enablement as scaling has become more challenging and the industry pushes the limits of EUV lithography. The working partnership between the design teams and the process development teams typically involves an iterative approach to evaluate the manufacturability of proposed designs, subsequent modifications to those designs and finally a design manual for the technology. While this approach has served the industry well for many generations, the challenges at the Beyond 7nm node require a more efficient approach. In this work, we describe the use of "Design Intent" lithographic layout optimization where we remove the iterative component of DTCO and replace it with an optimization that achieves both a "patterning friendly" design and minimizes the well-known EUV stochastic effects. Solved together, this "design intent" approach can more quickly achieve superior lithographic results while still meeting the original device's functional specifications. Specifically, in this work we will demonstrate "design intent" optimization for critical BEOL layers using design tolerance bands to guide the source mask co-optimization. The design tolerance bands can be either supplied as part of the original design or derived from some basic rules. Additionally, the EUV stochastic behavior is mitigated by enhancing the image log slope (ILS) for specific key features as part of the overall optimization. We will show the benefit of the "design intent" approach on both bidirectional and unidirectional 28nm min pitch standard logic layouts and compare it with the more typical iterative SMO approach, thus demonstrating the benefit of allowing the design to float within the specified range. Lastly, we discuss how the evolution of this approach could lead to layout optimization based entirely on some minimal set of functional requirements and process constraints.

  14. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.

  15. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach as well as an inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonableness and effectiveness.

  16. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.

  17. An improved 2D MoF method by using high order derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2017-11-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous researches. In this paper, the high-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, thus the target interval where the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be directly updated without explicitly solving the volume conservation equation. The direct update makes a further improvement in efficiency, especially when the number of edges of the polygon is increasing. Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half of the previous method on the quadrilateral cell and about one sixth on the decagon cell.
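
    For reference, the Halley iteration used to locate a stationary point of the objective F, i.e. a root of F', uses the first three derivatives and reads theta_{k+1} = theta_k - 2 F'F'' / (2 F''^2 - F'F'''). A minimal sketch, assuming the three derivative callables are available analytically on the target interval, is given below.

```python
def halley_minimize(d1, d2, d3, theta, n_iter=10, tol=1e-12):
    """Halley iteration for a stationary point of F(theta): find a root of F'
    using d1 = F', d2 = F'', d3 = F''' (assumed supplied analytically)."""
    for _ in range(n_iter):
        g, gp, gpp = d1(theta), d2(theta), d3(theta)
        step = 2.0 * g * gp / (2.0 * gp * gp - g * gpp)   # Halley update for g(theta) = 0
        theta -= step
        if abs(step) < tol:
            break
    return theta
```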

  18. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
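
    One iteration of the string-averaging incremental subgradient idea can be written compactly: each string is processed independently by incremental subgradient steps (sequentially here, but in parallel in practice) and the resulting end-points are averaged. The step-size rule, string partition and subgradient oracle below are placeholders; convergence in practice requires the step-size conditions analyzed in the paper.

```python
import numpy as np

def string_averaging_step(x, strings, subgrad, step):
    """One string-averaging iteration.

    strings       : list of lists of data-term indices (the string partition)
    subgrad(x, i) : a subgradient of the i-th data term at x
    """
    end_points = []
    for string in strings:
        z = x.copy()
        for i in string:                          # incremental pass along one string
            z = z - step * subgrad(z, i)
        end_points.append(z)
    return np.mean(end_points, axis=0)            # next iterate: average of the end-points
```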

  19. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.

  20. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  1. Self-consistent hybrid functionals for solids: a fully-automated implementation

    NASA Astrophysics Data System (ADS)

    Erba, A.

    2017-08-01

    A fully automated algorithm for determining the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89, 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
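    The outer loop can be summarized by the fixed-point update α ← 1/ε∞(α). The sketch below shows that outer iteration only; eps_of_alpha stands in for the full SCF + CPHF/KS calculation performed per iteration in the Crystal implementation, and the numbers are hypothetical.

```python
# Hedged sketch of the self-consistent update of the exact-exchange fraction,
# alpha_{n+1} = 1 / eps_inf(alpha_n).  In practice each call to eps_of_alpha
# would be an SCF + CPHF/KS run; here it is a toy dielectric model.
def self_consistent_alpha(eps_of_alpha, alpha0=0.25, tol=1e-6, max_iter=50):
    alpha = alpha0
    for _ in range(max_iter):
        eps = eps_of_alpha(alpha)        # stand-in for an SCF + CPHF/KS calculation
        alpha_new = 1.0 / eps
        if abs(alpha_new - alpha) < tol:
            return alpha_new
        alpha = alpha_new                # reusing previous density matrices as guesses
    return alpha                         # is what accelerates the real implementation

# Toy dielectric model (hypothetical numbers): eps grows mildly with alpha.
print(self_consistent_alpha(lambda a: 4.0 + 0.5 * a))   # converges to the fixed point near 0.24
```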

  2. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
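    The value-iteration recursion that the error-bound analysis builds on can be pictured with a tabular toy problem: a discretized scalar system driven to the origin under a quadratic stage cost. This is only a generic undiscounted value-iteration sketch; the paper's algorithms use neural-network approximators for the value function and control policy and explicitly account for approximation errors in the updates.

```python
# Hedged sketch: exact tabular value iteration V(x) <- min_u [ c(x,u) + V(f(x,u)) ]
# on a toy discretized scalar system x' = clip(x + u), with zero cost-to-go at the origin.
import numpy as np

xs = np.linspace(-1.0, 1.0, 21)          # state grid
us = np.array([-0.1, 0.0, 0.1])          # control grid
V = np.zeros_like(xs)

def nearest(x):
    return int(np.argmin(np.abs(xs - x)))

for _ in range(200):                      # value-iteration sweeps
    V_new = V.copy()
    for i, x in enumerate(xs):
        if abs(x) < 1e-9:
            V_new[i] = 0.0               # goal state: zero cost-to-go
            continue
        costs = [x * x + u * u + V[nearest(np.clip(x + u, -1, 1))] for u in us]
        V_new[i] = min(costs)
    V = V_new
print(V[nearest(0.5)])                    # approximate optimal cost-to-go from x = 0.5
```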

  3. Inductive flux usage and its optimization in tokamak operation

    DOE PAGES

    Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...

    2014-07-30

    The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the desired conditions to meet the physics objectives (500 MW fusion power with energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.

  4. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; the dose constraints for the voxels violating the dose-volume constraints are then gradually added to the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current constraint addition, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that is inevitably caused by adding constraints, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.

  5. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.

  6. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  7. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling.

    PubMed

    Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm is run for T iterations to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm is run again for GEN iterations to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration count. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some existing well-known algorithms.

  8. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling

    PubMed Central

    Deng, Qianwang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm is run for T iterations to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm is run again for GEN iterations to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration count. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some existing well-known algorithms. PMID:28458687

  9. ACT Payload Shroud Structural Concept Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart B.; Bednarcyk, Brett A.

    2010-01-01

    Aerospace structural applications demand weight-efficient designs that perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy-lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat-stiffened/corrugated panel and a fiber-reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. The analysis/optimization cycle is iterated to ensure a converged design. Sizing results for the hat-stiffened panel concept and the fiber-reinforced foam sandwich concept are presented.

  10. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

    To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of an iteratively optimized training dataset, dose-volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by an iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional new left-breast treatment plans are re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans are compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The treatment plans automatically generated with objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.

  11. Seismic Design of ITER Component Cooling Water System-1 Piping

    NASA Astrophysics Data System (ADS)

    Singh, Aditya P.; Jadhav, Mahesh; Sharma, Lalit K.; Gupta, Dinesh K.; Patel, Nirav; Ranjan, Rakesh; Gohil, Guman; Patel, Hiren; Dangi, Jinendra; Kumar, Mohit; Kumar, A. G. A.

    2017-04-01

    The successful performance of the ITER machine depends strongly upon the effective removal of heat from the in-vessel components and other auxiliary systems during Tokamak operation. This objective will be accomplished by the design of an effective Cooling Water System (CWS). The optimized piping layout is an important element of the CWS design and is one of the major design challenges owing to large thermal expansions and seismic accelerations, while safety, accessibility and maintainability aspects must also be considered. An important sub-system of the ITER CWS, the Component Cooling Water System-1 (CCWS-1), has very large pipe diameters, up to DN1600, with many intersections to fulfill the process flow requirements of clients for heat removal. A pipe intersection is the weakest link in the layout due to its high stress intensification factor. CCWS-1 piping up to the secondary confinement isolation valves, as well as in between these isolation valves, needs to survive a Seismic Level-2 (SL-2) earthquake during the Tokamak operation period to ensure structural stability of the system in a Safe Shutdown Earthquake (SSE) event. This paper presents the design, qualification and layout optimization of the ITER CCWS-1 loop to withstand an SSE event combined with sustained and thermal loads, as per the load combinations defined by ITER and the allowable limits of ASME B31.3. This paper also highlights the modal and response spectrum analyses performed to find the natural frequencies and the system behavior during the seismic event.

  12. Optimized iterative decoding method for TPC coded CPM

    NASA Astrophysics Data System (ADS)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative structure for decoding. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.

  13. Improving Drive Files for Vehicle Road Simulations

    NASA Astrophysics Data System (ADS)

    Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil

    2001-09-01

    Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains, as well as in the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat.
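    One compact way to picture the error-minimising iteration is a frequency-domain drive-signal update through the inverse frequency response function (FRF), sketched below on a toy linear system. The impulse response h, the relaxation gain of 0.7, and the absence of any seat/human weighting function are illustrative assumptions, not the study's actual setup.

```python
# Hedged sketch of frequency-domain drive-file iteration: the drive is corrected
# each pass by the response error filtered through the inverse FRF of a toy plant.
import numpy as np

rng = np.random.default_rng(1)
n = 1024
road = rng.normal(size=n)                        # measured road target (toy data)
h = np.array([0.5, 0.3, 0.15, 0.05])             # hypothetical shaker/vehicle impulse response

def respond(drive):                              # stand-in for a shaker table run
    return np.convolve(drive, h)[:n]

H = np.fft.rfft(h, n)                            # frequency response function (FRF)
drive = np.zeros(n)
for _ in range(20):                              # error-minimising iterations
    error = road - respond(drive)
    # update the drive through the inverse FRF, with a gain below 1 for stability
    drive += 0.7 * np.fft.irfft(np.fft.rfft(error) / H, n)
print(np.linalg.norm(road - respond(drive)))     # residual error is small after the iterations
```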

  14. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE PAGES

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-12-07

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  15. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  16. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
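    The space-mapping idea can be illustrated on a toy one-dimensional design problem, sketched below: the "fine model" stands in for an expensive simulation, the "coarse model" is a cheap surrogate shifted in the design variable, and each iteration needs only one fine-model evaluation plus a cheap parameter-extraction step. The models, the design specification, and the identity-mapping (aggressive space-mapping) update are illustrative assumptions, not the paper's FDFD and transmission-line models.

```python
# Hedged sketch of input space mapping on a toy 1D design problem.
import numpy as np
from scipy.optimize import minimize_scalar

w = np.linspace(0.0, 4.0, 81)                      # sweep variable (e.g. wavelength)

def coarse_response(x):                            # cheap surrogate model
    return 1.0 / ((w - x) ** 2 + 1.0)

def fine_response(x):                              # expensive-simulation stand-in
    return coarse_response(x + 0.2)                # hidden misalignment between the models

target = coarse_response(2.0)                      # design spec expressed on the response

def coarse_cost(x):
    return np.sum((coarse_response(x) - target) ** 2)

x_coarse = minimize_scalar(coarse_cost, bounds=(0, 4), method="bounded").x
x = x_coarse                                       # start from the coarse-model optimum
for _ in range(10):                                # space-mapping iterations
    Rf = fine_response(x)                          # one fine-model evaluation per iteration
    # parameter extraction: coarse input z whose response matches the fine response
    z = minimize_scalar(lambda z: np.sum((coarse_response(z) - Rf) ** 2),
                        bounds=(0, 4), method="bounded").x
    x = x - (z - x_coarse)                         # aggressive space-mapping update (B = I)
print(x)                                           # ~1.8, the fine-model optimum
```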

  17. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: clinical topic selection, evidence-based review assignment, literature review and initial manuscript preparation, iterative review process with author selection, and manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics that would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the highest-priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  18. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is designed for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are proposed for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result shows that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability and high efficiency.

  19. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit and iterative, based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on optimizing the convergence of the iteration to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
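    The essential ingredient of such an explicit Chebyshev scheme is a set of Richardson-type steps whose parameters are reciprocals of the Chebyshev nodes on the spectral interval of the operator. The sketch below shows this for a generic symmetric positive definite matrix with known eigenvalue bounds; it is a plain single-level illustration with a toy matrix and bounds, not the paper's adaptive, multigrid-coupled scheme.

```python
# Hedged sketch of an explicit Chebyshev-type iteration for A x = b, assuming a
# symmetric positive definite A whose spectrum lies in [lmin, lmax].
import numpy as np

def chebyshev_richardson(A, b, lmin, lmax, n_steps, x0=None):
    d, c = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    x = np.zeros_like(b) if x0 is None else x0.copy()
    # Chebyshev nodes mapped to [lmin, lmax]; their ordering affects round-off stability.
    taus = d + c * np.cos(np.pi * (2 * np.arange(1, n_steps + 1) - 1) / (2 * n_steps))
    for tau in taus:
        x = x + (b - A @ x) / tau        # explicit Richardson step with parameter 1/tau
    return x

# Toy usage: shifted 1D Laplacian with spectrum inside [1, 5].
n = 50
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = chebyshev_richardson(A, b, lmin=1.0, lmax=5.0, n_steps=30)
print(np.linalg.norm(A @ x - b))          # small residual after one sweep of 30 steps
```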

  20. Approximation of the Newton Step by a Defect Correction Process

    NASA Technical Reports Server (NTRS)

    Arian, E.; Batterman, A.; Sachs, E. W.

    1999-01-01

    In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can be used also for preconditioning in Generalized Minimum Residual (GMRES). We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.

  1. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  2. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.

  3. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. When constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets. Consequently, the auxiliary problems for constructing iteration points are linear programming problems. During the optimization process, the sets approximating the epigraph can be updated; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
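    A minimal Kelley-type cutting-plane sketch in the same spirit is shown below: the epigraph of a convex objective is approximated by accumulated linear cuts, and each iteration solves a small linear program. The quadratic objective, the box feasible set, and the stopping rule are illustrative; the paper's method additionally approximates the feasible set by cutting planes and periodically drops old cuts.

```python
# Hedged sketch of a Kelley-style cutting-plane method on a toy convex problem.
import numpy as np
from scipy.optimize import linprog

f  = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2           # convex objective
gf = lambda x: np.array([2 * (x[0] - 1.0), 2 * (x[1] + 0.5)])   # its gradient (a subgradient)

bounds = [(-2, 2), (-2, 2), (None, None)]    # variables (x1, x2, t); t models the epigraph
A_ub, b_ub = [], []
x = np.array([-2.0, 2.0])                    # starting point
for _ in range(200):
    g = gf(x)
    A_ub.append([g[0], g[1], -1.0])          # cut: f(x_k) + g_k.(x - x_k) <= t
    b_ub.append(g @ x - f(x))
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    x_new, t = res.x[:2], res.x[2]
    if f(x_new) - t < 1e-6:                  # polyhedral model matches the function: stop
        x = x_new
        break
    x = x_new
print(x)                                     # approaches the minimizer (1.0, -0.5)
```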

  4. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a successive-relaxation (SG-SR) iterative scheme through this relaxation factor, an additional improvement in convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved one order of magnitude improvement in iteration count and two orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the time to process an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 s to 0.094 s. In general, SG-SR processing can be completed within tens of milliseconds, which enables real-time operation in practical situations.
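    For orientation, the sketch below shows a plain iterative Savitzky-Golay baseline-peeling scheme of the kind such algorithms build on: the baseline estimate is repeatedly smoothed and clipped from above at the measured spectrum, so the broad fluorescence survives while narrow Raman peaks are removed. The window length, peak shapes, and fixed iteration count are illustrative; the paper's contribution (Gauss-Seidel style updates with a successive-relaxation factor) is not reproduced here.

```python
# Hedged sketch of generic iterative Savitzky-Golay baseline estimation for
# fluorescence removal (illustration only, not the RIA-SG-SR algorithm).
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline(spectrum, window=101, polyorder=3, n_iter=200):
    """Estimate a smooth fluorescence baseline by repeatedly smoothing and
    clipping the estimate from above at the measured spectrum."""
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, polyorder)
        baseline = np.minimum(baseline, smoothed)   # peel Raman peaks off the estimate
    return baseline

# Toy usage: broad fluorescence background plus narrow Raman-like peaks.
x = np.linspace(0, 1, 1000)
fluorescence = 5.0 * np.exp(-((x - 0.4) ** 2) / 0.2)
peaks = np.exp(-((x - 0.3) ** 2) / 1e-4) + 0.8 * np.exp(-((x - 0.7) ** 2) / 1e-4)
spectrum = fluorescence + peaks
raman = spectrum - sg_baseline(spectrum)            # background-subtracted spectrum
print(raman.max())                                  # peak heights roughly preserved
```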

  5. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from the optimum. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  6. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from the optimum. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  7. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. Design iteration, however, generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. First, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to solve the problem. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness. PMID:25431584

  8. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High-throughput, low-latency and reliable communication has always been a key goal for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receiving method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially erroneous bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. Moreover, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  9. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with strong noise components, which present processing challenges, especially for signal preservation and true-amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. The standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of the random noise to be attenuated and the amount of signal to be preserved. The filter then applies a median filter along the dominant dip and retains the signal. Iterations process the residual signals along the remaining dominant dips in descending order, until all signals have been retained. The method is tested on both synthetic and field data gathers and compared with the commonly used f-k least-squares denoising and f-x deconvolution.
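    The core operation, a median filter applied along a chosen dip, can be sketched as below for a 2D gather: each output sample is the median of samples gathered along a line with the given slope (in samples per trace) across neighbouring traces. The dip here is assumed known; in the paper it is estimated per window from a Fourier-radial transform in the f-k domain, and the filter is applied iteratively over the dominant dips.

```python
# Hedged sketch of a single dip-steered median filtering pass on a toy gather.
import numpy as np

def dip_steered_median(data, dip, filt_len=7):
    """Median filter along a line of slope `dip` (samples per trace)."""
    n_t, n_x = data.shape
    half = filt_len // 2
    out = np.zeros_like(data)
    for ix in range(n_x):
        for it in range(n_t):
            vals = []
            for k in range(-half, half + 1):
                jx, jt = ix + k, it + int(round(k * dip))
                if 0 <= jx < n_x and 0 <= jt < n_t:
                    vals.append(data[jt, jx])
            out[it, ix] = np.median(vals)
    return out

# Toy gather: a dipping linear event (2 samples per trace) plus random noise.
rng = np.random.default_rng(0)
n_t, n_x = 200, 40
gather = 0.5 * rng.normal(size=(n_t, n_x))
for ix in range(n_x):
    gather[50 + 2 * ix, ix] += 1.0           # coherent event with dip = 2
filtered = dip_steered_median(gather, dip=2.0)
print(filtered[50 + 2 * 20, 20])             # event amplitude largely preserved, noise suppressed
```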

  10. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  11. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  12. User input in iterative design for prevention product development: leveraging interdisciplinary methods to optimize effectiveness.

    PubMed

    Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M

    2017-10-01

    The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been an increasing recognition that the user experience, specifically the sensory experience, as well as the role of meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user (their sensory perceptions, their judgements of those experiences, and their willingness to use a product) is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.

  13. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    NASA Technical Reports Server (NTRS)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-specified components with smaller components custom-designed for the power system. Collaboration on mechanical placement reduced potential electromagnetic interference (EMI). Through application of the newly selected electrical components and thermal analysis data, a complete redesign of the electronic chassis was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern in making efficient use of airflow, and accessibility was also imperative to allow for servicing of the chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  14. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of the generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains efficient solutions compared with existing methods. Hence, the proposed method is promising and can easily be applied to optimize power system objectives.
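    For reference, a plain global-best particle swarm optimization loop is sketched below on an unconstrained toy test function. It is a generic PSO, not the proposed uniformly distributed two-stage variant, and it ignores the equality, inequality, and practical constraints of the economic dispatch problem; the swarm size, inertia, and acceleration coefficients are arbitrary illustrative values.

```python
# Hedged sketch of a plain global-best PSO loop (illustration only).
import numpy as np

def pso(cost, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-bound, bound, (n_particles, dim))   # initial positions
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage on a smooth multimodal test function with its optimum at the origin.
print(pso(lambda z: np.sum(z**2) + 10 * np.sum(1 - np.cos(z)), dim=3))
```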

  15. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  16. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
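
    A minimal sketch of the alternating (dimension-down) idea described above, assuming a toy objective and toy sub-solvers: the discrete link additions are fixed while the continuous expansions are optimized, then the roles are swapped, until the objective stops improving. Nothing below is taken from the paper itself.

```python
# Alternating continuous/discrete optimization in the spirit of DDIA,
# on a toy objective with hypothetical sub-solvers.
import itertools
import numpy as np
from scipy.optimize import minimize

def network_cost(y_discrete, x_continuous):
    # Toy stand-in for the upper-level objective (lower is better).
    y = np.asarray(y_discrete, dtype=float)
    x = np.asarray(x_continuous, dtype=float)
    return np.sum((x - 2.0 * y) ** 2) + 0.5 * np.sum(y) + np.sum(np.abs(x))

def solve_cndp(y_fixed, x0):
    # Continuous sub-problem: optimize expansions with link additions fixed.
    res = minimize(lambda x: network_cost(y_fixed, x), x0, method="Nelder-Mead")
    return res.x

def solve_dndp(x_fixed, n_links):
    # Discrete sub-problem: enumerate candidate link additions (small cases only).
    candidates = itertools.product([0, 1], repeat=n_links)
    return np.array(min(candidates, key=lambda y: network_cost(y, x_fixed)))

def ddia_like(n_links=3, max_iter=20, tol=1e-6):
    y, x, prev = np.zeros(n_links), np.ones(n_links), np.inf
    for _ in range(max_iter):
        x = solve_cndp(y, x)           # CNDP step: continuous variables
        y = solve_dndp(x, n_links)     # DNDP step: discrete variables
        cost = network_cost(y, x)
        if abs(prev - cost) < tol:
            break
        prev = cost
    return y, x, cost

print(ddia_like())
```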

  17. Policy Iteration for $H_\\infty $ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest $L_2$-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  18. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690

  19. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
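
    For orientation only, here is a tabular, error-free policy iteration on a small randomly generated MDP; it illustrates the evaluation/improvement alternation that generalized policy iteration extends to approximate, error-prone settings, not the developed algorithm itself.

```python
# Exact tabular policy iteration on a tiny assumed MDP.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)              # transition probabilities P[a, s, s']
R = rng.random((n_actions, n_states))          # reward for taking action a in state s

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = P[policy, np.arange(n_states), :]
    R_pi = R[policy, np.arange(n_states)]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily with respect to the evaluated value.
    Q = R + gamma * P @ V                       # shape (n_actions, n_states)
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("optimal policy:", policy, "values:", np.round(V, 3))
```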

  20. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
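
    A toy sketch of the simultaneous-update idea: rather than converging an inner flow solve for every outer design update, one relaxation sweep of a stand-in "flow" fixed-point iteration is interleaved with one small design-parameter step. The linear state equation and quadratic objective are assumptions, not aerodynamics.

```python
# Interleaved state/design updates on a toy fixed-point "flow" model.
import numpy as np

def flow_sweep(u, d):
    # One relaxation sweep of a toy state equation u = 0.5*u + d (converged state u = 2*d).
    return 0.5 * u + d

def objective_grad(u, d):
    # Gradient of the toy objective J = (u - 1)^2 with respect to d,
    # using dJ/dd = dJ/du * du/dd with du/dd = 2 at the converged state.
    return 2.0 * (u - 1.0) * 2.0

u, d, step = 0.0, 0.0, 0.05
for _ in range(200):
    u = flow_sweep(u, d)                 # advance the flow iterate
    d -= step * objective_grad(u, d)     # advance the design parameter in lockstep
print("design parameter:", round(d, 4), "state:", round(u, 4))
```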

  1. Coupled Solid Rocket Motor Ballistics and Trajectory Modeling for Higher Fidelity Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Ables, Brett

    2014-01-01

    Multi-stage launch vehicles with solid rocket motors (SRMs) face design optimization challenges, especially when the mission scope changes frequently. Significant performance benefits can be realized if the solid rocket motors are optimized to the changing requirements. While SRMs represent a fixed performance at launch, rapid design iterations enable flexibility at design time, yielding significant performance gains. The streamlining and integration of SRM design and analysis can be achieved with improved analysis tools. While powerful and versatile, the Solid Performance Program (SPP) is not conducive to rapid design iteration. Performing a design iteration with SPP and a trajectory solver is a labor intensive process. To enable a better workflow, SPP, the Program to Optimize Simulated Trajectories (POST), and the interfaces between them have been improved and automated, and a graphical user interface (GUI) has been developed. The GUI enables real-time visual feedback of grain and nozzle design inputs, enforces parameter dependencies, removes redundancies, and simplifies manipulation of SPP and POST's numerous options. Automating the analysis also simplifies batch analyses and trade studies. Finally, the GUI provides post-processing, visualization, and comparison of results. Wrapping legacy high-fidelity analysis codes with modern software provides the improved interface necessary to enable rapid coupled SRM ballistics and vehicle trajectory analysis. Low cost trade studies demonstrate the sensitivities of flight performance metrics to propulsion characteristics. Incorporating high fidelity analysis from SPP into vehicle design reduces performance margins and improves reliability. By flying an SRM designed with the same assumptions as the rest of the vehicle, accurate comparisons can be made between competing architectures. In summary, this flexible workflow is a critical component to designing a versatile launch vehicle model that can accommodate a volatile mission scope.

  2. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to finding the optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA) and the simulated annealing (SA) algorithm. The algorithms were applied through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, the Integral Time Absolute Error and time-domain specifications, were chosen and applied in PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study of the different algorithms has been carried out based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
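
    A minimal sketch, assuming a first-order stand-in plant and plain random search in place of PSO/GA/SA, of how an Integral Time Absolute Error (ITAE) fitness for candidate PID gains can be evaluated inside such a tuning loop.

```python
# ITAE evaluation of candidate PID gains on a toy first-order plant.
import numpy as np

def itae_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        err = 1.0 - y                  # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)             # toy first-order plant: dy/dt = -y + u
        prev_err = err
        cost += t * abs(err) * dt      # time-weighted absolute error
    return cost

rng = np.random.default_rng(0)
candidates = [rng.uniform([0, 0, 0], [10.0, 10.0, 0.5]) for _ in range(200)]
best = min(candidates, key=itae_cost)
print("best gains (kp, ki, kd):", np.round(best, 3), "ITAE:", round(itae_cost(best), 4))
```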

  3. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and provides a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  4. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.

  5. Guided Iterative Substructure Search (GI-SSS) - A New Trick for an Old Dog.

    PubMed

    Weskamp, Nils

    2016-07-01

    Substructure search (SSS) is a fundamental technique supported by various chemical information systems. Many users apply it in an iterative manner: they modify their queries to shape the composition of the retrieved hit sets according to their needs. We propose and evaluate two heuristic extensions of SSS aimed at simplifying these iterative query modifications by collecting additional information during query processing and visualizing this information in an intuitive way. This gives the user a convenient feedback on how certain changes to the query would affect the retrieved hit set and reduces the number of trial-and-error cycles needed to generate an optimal search result. The proposed heuristics are simple, yet surprisingly effective and can be easily added to existing SSS implementations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  7. Application of optimized multiscale mathematical morphology for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao

    2017-04-01

    In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are inhibited. Therefore, a unit scale is proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is implemented on the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
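
    Hedged sketch of conventional multiscale morphological filtering (the CMM baseline the paper builds on), averaging opening-closing and closing-opening results over several flat structuring-element scales; the ODIF operator and unit-scale iteration of OMM are not reproduced here.

```python
# Multiscale morphological filtering of a 1-D signal with flat structuring
# elements of increasing length (a CMM-style baseline, not the paper's OMM).
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def multiscale_morph_average(signal, scales=(2, 3, 4, 5)):
    outputs = []
    for s in scales:
        oc = grey_closing(grey_opening(signal, size=s), size=s)  # opening-closing
        co = grey_opening(grey_closing(signal, size=s), size=s)  # closing-opening
        outputs.append(0.5 * (oc + co))
    return np.mean(outputs, axis=0)

t = np.linspace(0, 1, 2000)
impulses = (np.sin(2 * np.pi * 30 * t) > 0.999).astype(float)   # sparse impulse train
noisy = impulses + 0.2 * np.random.default_rng(1).standard_normal(t.size)
print(multiscale_morph_average(noisy)[:5])
```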

  8. Applicability of the iterative technique for cardiac resynchronization therapy optimization: full-disclosure, 50-sequential-patient dataset of transmitral Doppler traces, with implications for future research design and guidelines.

    PubMed

    Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P

    2014-04-01

    This is a full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scatter of the AVDs selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). Under blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented, to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all the patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.

  9. To the theory of high-power gyrotrons with uptapered resonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumbrajs, O.; Nusinovich, G. S.

    In high-power gyrotrons it is desirable to combine an optimal resonator length with the optimal value of the resonator quality factor. In resonators with a constant radius of the central part, the possibilities for this combination are limited because the quality factor of the resonator sharply increases with its length. Therefore, attempts to increase the length to maximize the efficiency lead to such an increase in the quality factor that the optimal current becomes too small. Resonators with slightly uptapered profiles offer more flexibility in this regard. In such resonators, one can separate optimization of the interaction length from optimization of the quality factor, because the quality factor determined by diffractive losses can be reduced by increasing the angle of uptapering. In the present paper, these issues are analyzed by studying, as a typical example, a high-power 170 GHz gyrotron which is currently under development in Europe for ITER (http://en.wikipedia.org/wiki/ITER). The effect of a slight uptapering of the resonator wall on the efficiency enhancement and the purity of the radiation spectrum during gyrotron start-up and power modulation is studied. Results show that optimal modification of the shape of a slightly uptapered resonator may increase the gyrotron power from 1052 to 1360 kW.

  10. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  11. A new design approach to innovative spectrometers. Case study: TROPOLITE

    NASA Astrophysics Data System (ADS)

    Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob

    2014-05-01

    Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to the final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine to make the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.

  12. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017.

  13. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    PubMed Central

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule √n, and obtained the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids can lead the convergence to a local minimum. Secondly, this paper, by introducing a penalty function, proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that, when the number of clusters approached the number of objects in the dataset, the value of the clustering validity index neither monotonically decreased nor approached zero, so that the optimal number of clusters did not lose its robustness and decision function. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters through an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the number of FCM iterations while giving a stable clustering result. PMID:28042291
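
    Sketch of the trial-and-error idea described above: run standard fuzzy c-means for each candidate cluster count and keep the count with the best validity score. The simple Xie-Beni-style score used here is a stand-in for the paper's penalized compactness/separation index.

```python
# Standard FCM plus a validity-index sweep over candidate cluster counts.
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X))); U /= U.sum(axis=0)          # fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new; break
        U = U_new
    return centers, U

def xie_beni(X, centers, U, m=2.0):
    # Compactness over separation; lower is better.
    d2 = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) ** 2
    compact = np.sum((U ** m) * d2)
    sep = np.min([np.sum((a - b) ** 2) for i, a in enumerate(centers)
                  for b in centers[i + 1:]])
    return compact / (len(X) * (sep + 1e-12))

X = np.vstack([np.random.default_rng(i).normal(i * 5, 1.0, (50, 2)) for i in range(3)])
best_c = min(range(2, 7), key=lambda c: xie_beni(X, *fcm(X, c)))
print("estimated number of clusters:", best_c)
```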

  14. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  15. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636

  16. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.

  17. Aeroelastic optimization methodology for viscous and turbulent flows

    NASA Astrophysics Data System (ADS)

    Barcelos Junior, Manuel Nascimento Dias

    2007-12-01

    In recent years, the development of faster computers and parallel processing allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to the final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treat the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses linearized systems of equations. The methodologies developed in this work are tested and verified by using realistic aeroelastic systems.

  18. Continuous Firefly Algorithm for Optimal Tuning of Pid Controller in Avr System

    NASA Astrophysics Data System (ADS)

    Bendjeghaba, Omar

    2014-01-01

    This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an automatic voltage regulator (AVR) system. In the tuning process, the CFA is iterated to reach optimal or near-optimal PID controller parameters, where the main goal is to improve the AVR step response characteristics. Conducted simulations show the effectiveness and efficiency of the proposed approach. Furthermore, the proposed approach can improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method has better control system performance in terms of time-domain specifications and set-point tracking.

  19. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration to the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphic processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to be around 25 HU and increases the spatial uniformity by a factor of 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves more stable optimization path and more clinically acceptable reconstructed image. The method proposed by us does not rely on prior information and thus is practically attractive to the applications of low-dose CBCT imaging in the clinic.

  20. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between the function properties and the topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of membership vector with weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness for improving the speed and accuracy of community structure detection. To estimate the optimal stop time of iteration, we utilize a new stability measure which is defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. It naturally supports the overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  1. MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to: (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field); a shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109).

  3. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  4. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
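
    A hedged sketch of the parameterize-and-optimize idea: symmetric triangular membership widths for error and control output are treated as a design vector and tuned by a numerical optimizer against a toy closed-loop cost. The plant, the three-rule base and the cost function are illustrative assumptions, not the paper's spacecraft pointing problem.

```python
# Tuning fuzzy membership-function widths with a numerical optimizer.
import numpy as np
from scipy.optimize import minimize

def tri(x, centers, width):
    # Triangular membership degree of scalar x in fuzzy sets at the given centers.
    width = abs(width) + 1e-9
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

def fuzzy_controller(err, widths):
    we, wu = widths
    centers_e = np.array([-1.0, 0.0, 1.0])   # negative / zero / positive error
    centers_u = np.array([-1.0, 0.0, 1.0])   # negative / zero / positive control action
    mu = tri(err, centers_e, we)
    u = np.sum(mu * centers_u * wu) / (np.sum(mu) + 1e-9)   # weighted-average defuzzification
    return float(np.clip(u, -2.0, 2.0))

def closed_loop_cost(widths, dt=0.05, t_end=10.0):
    y, cost = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = fuzzy_controller(1.0 - y, widths)   # track a unit step setpoint
        y += dt * (-0.5 * y + u)                # toy first-order plant
        cost += dt * (1.0 - y) ** 2
    return cost

res = minimize(closed_loop_cost, x0=[1.0, 1.0], method="Nelder-Mead")
print("tuned membership widths (error, output):", np.round(res.x, 3))
```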

  5. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.

  6. Assay Development Process | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    Typical steps involved in the development of a  mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e

  7. Parallel AFSA algorithm accelerating based on MIC architecture

    NASA Astrophysics Data System (ADS)

    Zhou, Junhao; Xiao, Hong; Huang, Yifan; Li, Yongzhao; Xu, Yuanrui

    2017-05-01

    Analysis of the artificial fish swarm algorithm (AFSA) as previously applied to the traveling salesman problem shows that algorithm efficiency is often a major problem and that its processing method does not fully respond to the characteristics of the traveling salesman problem; a parallel, improved AFSA process is therefore proposed. Simulations were analyzed against currently known optimal TSP solutions, and the results show that the improved AFSA requires fewer iterations and roughly doubles operating efficiency on MIC cards, improving efficiency significantly.

  8. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
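
    Illustrative sketch only: forward block Gauss-Seidel sweeps on a generic block tridiagonal system A x = b, the structure the abstract attributes to the multiple-shooting optimality conditions. The blocks below are random and diagonally dominated for demonstration, not derived from an actual discrete-time optimal control problem.

```python
# Forward block Gauss-Seidel sweeps on a block tridiagonal linear system.
import numpy as np

def block_gauss_seidel(diag, lower, upper, b, sweeps=50):
    # diag[i] couples x[i] to itself; lower[i] couples x[i] to x[i-1];
    # upper[i] couples x[i] to x[i+1]. lower[0] and upper[-1] are unused.
    n = len(diag)
    x = [np.zeros(d.shape[0]) for d in diag]
    for _ in range(sweeps):
        for i in range(n):
            rhs = b[i].copy()
            if i > 0:
                rhs -= lower[i] @ x[i - 1]
            if i < n - 1:
                rhs -= upper[i] @ x[i + 1]
            x[i] = np.linalg.solve(diag[i], rhs)
    return x

rng = np.random.default_rng(0)
n, k = 6, 3
diag = [np.eye(k) * 4 + rng.standard_normal((k, k)) * 0.1 for _ in range(n)]
lower = [rng.standard_normal((k, k)) * 0.1 for _ in range(n)]
upper = [rng.standard_normal((k, k)) * 0.1 for _ in range(n)]
b = [rng.standard_normal(k) for _ in range(n)]
x = block_gauss_seidel(diag, lower, upper, b)

# Assemble the dense matrix to verify the sweeps against a direct solve.
A = np.zeros((n * k, n * k))
for i in range(n):
    A[i*k:(i+1)*k, i*k:(i+1)*k] = diag[i]
    if i > 0:
        A[i*k:(i+1)*k, (i-1)*k:i*k] = lower[i]
    if i < n - 1:
        A[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = upper[i]
print(np.allclose(np.concatenate(x), np.linalg.solve(A, np.concatenate(b)), atol=1e-6))
```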

  9. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
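
    A generic successive over-relaxation (SOR) sweep on a small linear system, shown only to illustrate the relaxation step referred to above; the non-LTE transfer operators themselves are far more involved and are not reproduced.

```python
# Successive over-relaxation (SOR) for a small linear system A x = b.
import numpy as np

def sor(A, b, omega=1.5, sweeps=100, tol=1e-10):
    x = np.zeros_like(b)
    for _ in range(sweeps):
        x_old = x.copy()
        for i in range(len(b)):
            # Use already-updated entries (Gauss-Seidel) plus over-relaxation.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([1.0, 2, 3])
print(sor(A, b), np.linalg.solve(A, b))   # the two solutions should agree
```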

  10. Designing a freeform optic for oblique illumination

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross D.; Ulanch, Rachel N.; Williams, Kaitlyn E.; Ruiz Diaz, Liliana; King, Page; Koshel, R. John

    2017-11-01

    The Functional Freeform Fitting (F4) method is utilized to design a freeform optic for oblique illumination of Mark Rothko's Green on Blue (1956). Shown are preliminary results from an iterative freeform design process; from problem definition and specification development to surface fit, ray tracing results, and optimization. This method is applicable to both point and extended sources of various geometries.

  11. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and regards the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function by making use of the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The main points are summarized as follows. •The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. •This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules. •This text comes up with the model operator and the observed operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  12. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part II

    NASA Technical Reports Server (NTRS)

    Crasner, Aaron I.; Scola,Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. Thermal modeling software was used to run steady-state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.

  13. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve the optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of the optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with the sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
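
    Sketch, with an assumed toy objective and an assumed per-parameter sensitivity weighting, of biasing the DDS-style variable-selection step toward the most sensitive decision variables, as described above.

```python
# DDS-style search whose variable-selection step is weighted by sensitivity scores.
import numpy as np

def dds_with_sensitivity(f, lo, hi, sens, n_eval=300, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi, sens = map(np.asarray, (lo, hi, sens))
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    weights = sens / sens.sum()
    for i in range(1, n_eval):
        p_select = 1.0 - np.log(i) / np.log(n_eval)        # standard DDS schedule
        mask = rng.random(len(lo)) < p_select * weights * len(lo)
        if not mask.any():
            mask[rng.choice(len(lo), p=weights)] = True     # always perturb at least one DV
        x = x_best.copy()
        x[mask] += rng.standard_normal(mask.sum()) * r * (hi - lo)[mask]
        x = np.clip(x, lo, hi)
        fx = f(x)
        if fx < f_best:                                     # greedy acceptance, as in DDS
            x_best, f_best = x, fx
    return x_best, f_best

f = lambda x: np.sum((x - np.array([1.0, -2.0, 0.5, 3.0])) ** 2)
sens = np.array([0.5, 0.3, 0.1, 0.1])   # assumed per-parameter sensitivity scores
print(dds_with_sensitivity(f, [-5.0] * 4, [5.0] * 4, sens))
```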

  14. Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Hull, Patrick V.

    2014-01-01

    This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design that will have the primary natural frequencies at or above a target value as well as minimize the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine what class of structure would produce suitable candidate designs. From this a stiffened sheet metal approach was selected leading to optimization of beam placement through generating a two-dimensional mesh and varying the physical location of reinforcing beams. Finally, the design space is reformulated as a binary problem using 1-dimensional beam elements to truncate the design space to allow faster convergence and additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.

  15. Traffic Aware Planner for Cockpit-Based Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.

    2016-01-01

    The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. TAP was developed using an iterative process; its latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.

  16. Blind One-Bit Compressive Sampling

    DTIC Science & Technology

    2013-01-17

    14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  17. Robust resolution enhancement optimization methods to process variations based on vector imaging model

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong

    2012-03-01

    Optical proximity correction (OPC) and phase-shifting mask (PSM) techniques are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem, but they are designed only for nominal imaging parameters, without giving sufficient attention to process variations due to aberrations, defocus, and dose variation. However, the effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continues to shrink. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle these problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) is exploited during the optimization procedure.

  18. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free approach to trajectory tracking of multiple-input multiple-output (MIMO) systems that combines iterative learning control (ILC) with primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is then recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed directly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach, and experimental results are given.

  19. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    A generalized mathematical model of decision-making in the problem of planning and mode selection to provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability, and safety of such a complex system are evaluated using several indicators simultaneously, in particular pressure, flow, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operational mode of a complex heat supply system is chosen on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments where necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the admissible deviation of the indoor air temperature from the required value, can also be changed interactively. This makes it possible to carry out adjustment activities in the best way and to improve the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at the sources and pumping stations.

  20. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee that the upper and lower iterative value functions converge to the upper and lower optima, respectively. When the saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, and the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, and are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.

  1. Classification methodology for tritiated waste requiring interim storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cana, D.; Dall'ava, D.; Decanis, C.

    2015-03-15

    Fusion machines like the ITER experimental research facility will use tritium as fuel. Therefore, most of the solid radioactive waste will result not only from activation by 14 MeV neutrons, but also from contamination by tritium. As a consequence, optimizing the treatment process for waste containing tritium (tritiated waste) is a major challenge. This paper summarizes the studies conducted in France within the framework of the French national plan for the management of radioactive materials and waste. The paper recommends a reference program for managing this waste based on its sorting, treatment and packaging by the producer. It also recommends setting up a 50-year temporary storage facility to allow for tritium decay and designing future disposal facilities using tritiated radwaste characteristics as input data. This paper first describes this waste program and then details an optimized classification methodology which takes into account tritium decay over a 50-year storage period. The paper also describes a specific application for purely tritiated waste and discusses the set-up expected to be implemented for ITER decommissioning waste (current assumption). Comparison between this optimized approach and other viable detritiation techniques will be drawn. (authors)

  2. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
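
    A minimal sketch of a coordinate-direction pattern search step may help make the poll-and-refine iteration concrete; the published method additionally shapes the poll set to conform to the nondifferentiability hyperplanes and the feasible-region boundary, which this sketch omits. The test function is an arbitrary nonsmooth example.

```python
# Basic Generalized Pattern Search style poll step (illustrative sketch only).
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])   # maximal positive basis
    for _ in range(max_iter):
        improved = False
        for d in directions:                          # poll step
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:
                x, fx, improved = trial, f_trial, True
                break
        if not improved:
            step *= 0.5                               # refine the mesh on failure
            if step < tol:
                break
    return x, fx

x, fx = pattern_search(lambda v: abs(v[0]) + (v[1] - 1.0) ** 2, [2.0, -1.0])
print(x, fx)
```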

  3. Iterative Reconstruction Techniques in Abdominopelvic CT: Technical Concepts and Clinical Implementation.

    PubMed

    Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V

    2015-07-01

    This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.

  4. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were effective in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular dataset, down to 8 nm at best, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (graphics processing unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size in the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  6. A new approach to the human muscle model.

    PubMed

    Baildon, R W; Chapman, A E

    1983-01-01

    Hill's (1938) two-component muscle model is used as the basis for digital computer simulation of human muscular contraction by means of an iterative process. The contractile (CC) and series elastic (SEC) components are lumped components of the structures which produce and transmit torque to the external environment. The CC is described in angular terms along four dimensions as a series of non-planar torque-angle-angular velocity surfaces stacked on top of each other, each surface being appropriate to a given level of muscular activation. The SEC is described similarly along dimensions of torque, angular stretch, overall muscle angular displacement and activation. The iterative process introduces negligible error and allows the mechanical outcome of a variety of normal muscular contractions to be evaluated parsimoniously. The model allows analysis of many aspects of muscle behaviour as well as optimization studies. Definition of the relevant relations should also allow reproduction and prediction of the outcome of contractions in individuals.

  7. Systems and methods for optimal power flow on a radial network

    DOEpatents

    Low, Steven H.; Peng, Qiuyu

    2018-04-24

    Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to nodes in the set of at least one node; receive operating parameters from the nodes in the set of at least one node; calculate a plurality of updated node operating parameters using an iterative process that determines the updated parameters from the node operating parameters describing the operating parameters of the node and of the set of at least one node, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.

  8. Optimal Decentralized Protocol for Electric Vehicle Charging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gan, LW; Topcu, U; Low, SH

    We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as "flat" as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration and use potentially outdated control signals when they update. Moreover, the algorithm only requires each EV to solve its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.
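
    The sketch below illustrates, in simplified form, the decentralized valley-filling exchange described above (it is not the authors' exact update rule): the utility broadcasts the current aggregate load as a price signal, each EV shifts its energy toward the cheapest slots subject to its rate limit, and damped updates let the aggregate profile flatten over successive iterations. All numerical values are illustrative.

```python
# Simplified decentralized valley-filling iteration for EV charging (sketch only).
import numpy as np

T = 24                                                    # hourly time slots
base_load = 2.0 + np.sin(np.linspace(0, 2 * np.pi, T))   # non-EV demand (illustrative)
energy_need = np.array([6.0, 8.0, 5.0])                   # energy required by each EV
rate_cap = np.array([3.0, 3.0, 2.0])                      # max charging rate per slot

def local_response(price, need, cap):
    """Fill the cheapest slots first, respecting the per-slot rate cap."""
    profile = np.zeros(T)
    for t in np.argsort(price):
        take = min(cap, need - profile.sum())
        if take <= 0:
            break
        profile[t] = take
    return profile

profiles = np.zeros((len(energy_need), T))
for _ in range(50):                                       # utility/EV message exchanges
    price = base_load + profiles.sum(axis=0)              # broadcast signal: total load
    for i in range(len(energy_need)):
        # Each EV reacts to the load excluding its own contribution.
        target = local_response(price - profiles[i], energy_need[i], rate_cap[i])
        profiles[i] += 0.3 * (target - profiles[i])       # damped (relaxed) update

print(np.round(base_load + profiles.sum(axis=0), 2))      # aggregate load, roughly flat
```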

  9. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    PubMed

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
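
    For concreteness, a minimal Fletcher-Reeves nonlinear conjugate-gradient iteration, the variant reported above as fastest, is sketched below on a generic smooth objective; the RBE-weighted dose objective and the Local Effect Model of TRiP98 are outside the scope of this sketch.

```python
# Fletcher-Reeves conjugate gradient with Armijo backtracking (illustrative sketch).
import numpy as np

def fletcher_reeves(f, grad, x0, n_iter=200, tol=1e-8):
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g                                        # initial search direction
    for _ in range(n_iter):
        alpha, fx = 1.0, f(x)
        # Backtracking line search along d (Armijo condition).
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = g_new.dot(g_new) / g.dot(g)        # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example on a quadratic bowl.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda v: 0.5 * v.dot(A).dot(v)
grad = lambda v: A.dot(v)
print(fletcher_reeves(f, grad, [4.0, -3.0]))
```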

  10. Algorithms for the optimization of RBE-weighted dose in particle therapy

    NASA Astrophysics Data System (ADS)

    Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.

    2013-01-01

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.

  11. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning. A discretized pursuit LA is the most popular one among them. During an iteration, its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, the learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates come mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed that performs both phases 1 and 3 with computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence speed than the classical one when operating in stationary environments. This paper can promote the applications of LA in the large-scale-action oriented area that requires efficient reinforcement learning tools with assured ε-optimality, fast convergence speed, and low computational complexity for each iteration.
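
    A minimal sketch of a classical discretized pursuit learning automaton is given below for orientation; the cited work's contribution is to make the selection and probability-update phases cheaper per iteration, whereas this sketch keeps the straightforward updates whose cost grows with the number of actions. The environment reward probabilities are illustrative.

```python
# Classical discretized pursuit learning automaton (illustrative sketch only).
import numpy as np

def discretized_pursuit(reward_probs, n_steps=20000, resolution=1000, seed=0):
    rng = np.random.default_rng(seed)
    r = len(reward_probs)
    delta = 1.0 / (r * resolution)              # discretized probability step
    p = np.full(r, 1.0 / r)                     # action-selection probabilities
    successes = np.zeros(r)
    counts = np.zeros(r)
    for _ in range(n_steps):
        a = rng.choice(r, p=p)                  # phase 1: choose the next action
        reward = rng.random() < reward_probs[a] # stochastic environment response
        counts[a] += 1
        successes[a] += reward
        estimates = successes / np.maximum(counts, 1)
        best = int(np.argmax(estimates))        # phase 2: current best estimate
        # Phase 3: pursue the estimated best action in discrete probability steps.
        decrease = np.minimum(p, delta)
        decrease[best] = 0.0
        p -= decrease
        p[best] += decrease.sum()
        p /= p.sum()                            # guard against floating-point drift
    return p, estimates

p, est = discretized_pursuit([0.2, 0.45, 0.8, 0.6])
print(p.argmax(), np.round(p, 3))
```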

  12. Simulative design and process optimization of the two-stage stretch-blow molding process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  13. Simulative design and process optimization of the two-stage stretch-blow molding process

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  14. Optimal Averages for Nonlinear Signal Decompositions - Another Alternative for Empirical Mode Decomposition

    DTIC Science & Technology

    2014-10-01

    nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs), and each of the... function, optimization. 1 Introduction It is well known that nonlinear and non-stationary signal analysis is important and difficult. Historically

  15. Udzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem

    NASA Astrophysics Data System (ADS)

    Lapin, A.; Romanenko, A.

    2016-11-01

    The article deals with an optimal control problem governed by a parabolic equation as the state problem. There are point-wise constraints on the state and control functions. The objective functional involves an observation given over the domain at every time instant. Conditions for the convergence of the Udzawa-type iterative method are given. The parareal method used to invert the preconditioner is described. The results of calculations are presented.
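
    To make the saddle-point structure behind an Uzawa-type iteration concrete, the following is a minimal sketch for an equality-constrained quadratic problem; the cited work applies the iteration to a parabolic state equation with a parareal preconditioner, which is well beyond this toy example. All matrices and the step size are illustrative.

```python
# Basic Uzawa iteration for min 0.5 x'Ax - b'x subject to Bx = c (sketch only).
import numpy as np

def uzawa(A, b, B, c, tau=0.5, n_iter=200):
    lam = np.zeros(B.shape[0])                   # Lagrange multipliers
    for _ in range(n_iter):
        x = np.linalg.solve(A, b - B.T @ lam)    # primal step (unconstrained minimum)
        lam = lam + tau * (B @ x - c)            # dual ascent on the constraint residual
    return x, lam

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
B = np.array([[1.0, 1.0]])                       # single constraint x1 + x2 = 1
c = np.array([1.0])
x, lam = uzawa(A, b, B, c)
print(x, B @ x)                                  # constraint approximately satisfied
```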

  16. Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty

    NASA Astrophysics Data System (ADS)

    Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time persisting statistical characteristics of a given variable as inferred by historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated to any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that will actually realize.
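
    A minimal batch Fitted Q-Iteration sketch on a toy one-dimensional task (move toward the origin) is given below to illustrate the learning scheme that sFQI extends by appending the uncertain scenario parameters to the state; it uses a scikit-learn tree ensemble as the function approximator, and all task details are illustrative.

```python
# Batch Fitted Q-Iteration on a toy 1-D control task (illustrative sketch only).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
ACTIONS = np.array([-1.0, 0.0, 1.0])
GAMMA = 0.95

# Batch of transitions (s, a, r, s') collected with a random policy.
s = rng.uniform(-5, 5, size=2000)
a = rng.choice(ACTIONS, size=2000)
s_next = np.clip(s + a, -5, 5)
r = -np.abs(s_next)                              # reward: stay close to the origin

X = np.column_stack([s, a])
q_model, targets = None, r.copy()
for _ in range(40):                              # FQI iterations
    q_model = ExtraTreesRegressor(n_estimators=30, random_state=0).fit(X, targets)
    # Bellman targets: r + gamma * max_a' Q(s', a')
    q_next = np.column_stack(
        [q_model.predict(np.column_stack([s_next, np.full_like(s_next, act)]))
         for act in ACTIONS])
    targets = r + GAMMA * q_next.max(axis=1)

def greedy_action(state):
    q = [q_model.predict([[state, act]])[0] for act in ACTIONS]
    return ACTIONS[int(np.argmax(q))]

print(greedy_action(3.0), greedy_action(-2.0))   # expect -1.0 and +1.0
```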

  17. Regression analysis as a design optimization tool

    NASA Technical Reports Server (NTRS)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process, as opposed to a detailed part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.

  18. A Novel Particle Swarm Optimization Algorithm for Global Optimization

    PubMed Central

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
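
    For reference, a minimal standard global-best PSO loop is sketched below; the neighbor-information, abandonment, and chaotic-search refinements described above are not reproduced here, and the test function and parameter values are illustrative.

```python
# Standard global-best Particle Swarm Optimization loop (illustrative sketch only).
import numpy as np

def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = lo + rng.random((n_particles, lo.size)) * (hi - lo)   # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                   # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()    # update global best
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2) + 10 * np.sum(1 - np.cos(p)),
                [-5] * 4, [5] * 4)
print(best, val)
```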

  19. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.

  20. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and, on the other hand, point to the downfall of current design methods in the face of new search techniques.

  1. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and, on the other hand, point to the downfall of current design methods in the face of new search techniques. PMID:29023482

  2. Flutter optimization in fighter aircraft design

    NASA Technical Reports Server (NTRS)

    Triplett, W. E.

    1984-01-01

    The efficient design of aircraft structure involves a series of compromises among various engineering disciplines. These compromises are necessary to ensure the best overall design. To effectively reconcile the various technical constraints requires a number of design iterations, with the accompanying long elapsed time. Automated procedures can reduce the elapsed time, improve productivity and hold the promise of optimum designs which may be missed by batch processing. Several examples are given of optimization applications including aeroelastic constraints. Particular attention is given to the success or failure of each example and the lessons learned. The specific applications are shown. The final two applications were made recently.

  3. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high-spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high-spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher-spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps are omitted here for the sake of brevity. Their overall effect is to adaptively update the diversity defocus values according to the recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during recovery. The amplitude of the aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
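
    The following is a minimal sketch, on simulated data, of the inner loop described above: for each defocus-diversity image a Gerchberg-Saxton-style transform step produces a phase estimate, the known diversity defocus is removed, and the per-image estimates are averaged to seed the next pass. The adaptive outer loop that migrates recovered defocus into the diversity function is not reproduced, and all optical parameters are illustrative.

```python
# Focus-diverse iterative-transform phase retrieval, inner loop only (sketch).
import numpy as np

N = 64
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
pupil = (xx**2 + yy**2 <= 1.0).astype(float)                # circular aperture
defocus_mode = 2.0 * (xx**2 + yy**2) - 1.0                  # defocus-like shape
true_phase = 0.6 * xx * yy * pupil                          # unknown aberration (rad)
diversities = [d * defocus_mode for d in (-3.0, 0.0, 3.0)]  # known diversity phases

def psf(phase):
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

images = [psf(true_phase + div) for div in diversities]     # simulated focal images

estimate = np.zeros((N, N))
for _ in range(200):                                        # inner-loop iterations
    per_image = []
    for img, div in zip(images, diversities):
        field = pupil * np.exp(1j * (estimate + div))
        focal = np.fft.fftshift(np.fft.fft2(field))
        focal = np.sqrt(img) * np.exp(1j * np.angle(focal)) # impose measured modulus
        back = np.fft.ifft2(np.fft.ifftshift(focal))
        per_image.append(np.angle(back * np.exp(-1j * div)) * pupil)
    estimate = np.mean(per_image, axis=0)                   # combine the estimates

err = (estimate - true_phase) * pupil
print(np.sqrt((err**2).sum() / pupil.sum()))                # RMS residual (radians)
```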

  4. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  5. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.

  6. Optimization of flexible wing structures subject to strength and induced drag constraints

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1977-01-01

    An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.

  7. How can students contribute? A qualitative study of active student involvement in development of technological learning material for clinical skills training.

    PubMed

    Haraldseid, Cecilie; Friberg, Febe; Aase, Karina

    2016-01-01

    Policy initiatives and a growing body of literature within higher education both call for students to become more involved in creating their own learning. However, there is a lack of studies in undergraduate nursing education that actively involve students in developing such learning material, with descriptions of the students' roles in these interactive processes. This is an explorative qualitative study using data from focus group interviews, field notes and student notes; the data were subjected to qualitative content analysis. Active student involvement through an iterative process identified five learning needs that are especially important to the students: clarification of learning expectations, help to recognize the bigger picture, stimulation of interaction, creation of structure, and receiving context-specific content. The iterative involvement of students during the development of new technological learning material will enhance the identification of important learning needs for students. The use of student and teacher knowledge through an adapted co-design process is the most optimal level of that involvement.

  8. Graph processing platforms at scale: practices and experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C

    2015-01-01

    Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementations of algorithms to discover the desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform has different strengths, depending on the type of graph operation. While Urika performs best on non-iterative operations like degree distribution, GraphX outperforms the others on iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.

  9. Noise models for low counting rate coherent diffraction imaging.

    PubMed

    Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John

    2012-11-05

    Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretations drawn from a CDI iterative technique require a detailed understanding of the relationship between the noise model and the used inversion method. We observe that iterative algorithms often assume implicitly a noise model. For low counting rates, each noise model behaves differently. Moreover, the used optimization strategy introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

  10. A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.

    2017-03-01

    Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.

  11. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
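
    A minimal sketch of the stochastic-iteration coupling described above is given below: a mock "Monte Carlo" power solve whose statistical noise shrinks with the number of histories is relaxed into a mock thermal-hydraulics feedback, with the number of histories growing and the relaxation factor shrinking each iteration. The physics (power and temperature feedback shapes) is purely illustrative.

```python
# Stochastic-iteration coupling of a mock MC solver and TH feedback (sketch only).
import numpy as np

rng = np.random.default_rng(0)
NODES = 10

def monte_carlo_power(temperature, histories):
    """Mock MC solve: true shape plus noise ~ 1/sqrt(histories)."""
    shape = np.cos(np.linspace(-1.2, 1.2, NODES)) * (1 - 5e-4 * (temperature - 600))
    noisy = shape + rng.normal(0, 1.0 / np.sqrt(histories), NODES)
    return noisy / noisy.mean()                   # normalized power distribution

def thermal_hydraulics(power):
    return 560.0 + 120.0 * power                  # mock temperature feedback

power = np.ones(NODES)
total_histories = 0
for it in range(1, 11):
    histories = 1000 * it                         # more histories each iteration
    total_histories += histories
    alpha = histories / total_histories           # decreasing relaxation factor
    temperature = thermal_hydraulics(power)
    power = (1 - alpha) * power + alpha * monte_carlo_power(temperature, histories)
    print(f"iter {it}: histories={histories}, relaxation={alpha:.2f}")
```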

  12. Approximate dynamic programming for optimal stationary control with control-dependent noise.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2011-12-01

    This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
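
    For orientation, the sketch below shows classical model-based policy iteration for the noise-free continuous-time LQR special case, in which the evaluated cost matrix converges to the Riccati solution; the cited work performs an analogous iteration from data, with additive and multiplicative noise handled via Itô calculus. The system matrices and initial gain are illustrative.

```python
# Model-based policy iteration for continuous-time LQR (illustrative sketch only).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, 2.0]])      # open-loop unstable example system
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[0.0, 4.0]])                   # initial stabilizing feedback gain
for _ in range(10):                          # policy iteration
    Acl = A - B @ K
    # Policy evaluation: solve the Lyapunov equation for the cost matrix P.
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: update the gain from the evaluated cost.
    K = np.linalg.solve(R, B.T @ P)
print(P)                                     # converges to the Riccati solution
print(K)                                     # corresponding optimal feedback gain
```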

  13. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    PubMed

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.

  14. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study proposes a three-dimensional visualization approach, emphasizing an improved genetic algorithm, for the layout design of a standard, discrete shipbuilding workshop. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an idealized equipment layout and a mathematical model whose objective is to minimize the total distance traveled between machines. An improved control operator is implemented to increase the iterative efficiency of the genetic algorithm and yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using AHP. The optimized discrete production workshop thus provides a point of reference for the optimization and layout of digitalized production workshops.

  15. Vectorial mask optimization methods for robust optical lithography

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.

    2012-10-01

    Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.

  16. Highly undersampled contrast-enhanced MRA with iterative reconstruction: Integration in a clinical setting.

    PubMed

    Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O

    2015-12-01

    To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm(3) ) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.

  17. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
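
    A minimal sketch of the positioning scheme, with the array geometry, Gaussian width and iteration count taken as assumed example values rather than the published settings; the Gaussian width plays the role of the light-spread parameter that the paper optimizes per scintillator configuration.

        import numpy as np

        def iterative_weighted_cog(signals, positions, sigma=2.0, n_iter=20):
            # signals: per-channel SiPM amplitudes, shape (N,)
            # positions: channel centre coordinates, shape (N, 2), in mm
            signals = np.asarray(signals, dtype=float)
            positions = np.asarray(positions, dtype=float)
            est = signals @ positions / signals.sum()           # standard centre of gravity
            for _ in range(n_iter):
                d2 = np.sum((positions - est) ** 2, axis=1)
                w = signals * np.exp(-d2 / (2.0 * sigma ** 2))  # weight around current estimate
                est = w @ positions / w.sum()                   # re-centre on weighted mean
            return est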

  18. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  19. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
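
    The parametric form with an ε-optimal iterative solution is the kind of problem usually handled with a Dinkelbach-style update of the efficiency parameter; the sketch below illustrates that idea with placeholder callables (throughput, power and argmax_inner, the latter standing in for the water-filling inner solver) and is not the authors' exact algorithm.

        def parametric_energy_efficiency(throughput, power, argmax_inner,
                                         q0=0.0, eps=1e-6, max_iter=50):
            # Iterate on q until max_p throughput(p) - q * power(p) is (nearly) zero,
            # at which point q equals the optimal throughput-to-power ratio.
            q = q0
            p = None
            for _ in range(max_iter):
                p = argmax_inner(q)                  # inner power-allocation step
                f = throughput(p) - q * power(p)
                if abs(f) < eps:                     # eps-optimal stopping rule
                    break
                q = throughput(p) / power(p)         # update the efficiency parameter
            return p, q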

  20. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  1. Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution

    NASA Astrophysics Data System (ADS)

    van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.

    2014-12-01

    Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectories of the plasma current and of the EC heating and current drive distribution are determined such that a chosen cost function is minimized while constraints are satisfied. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.

  2. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.

  3. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
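
    A condensed sketch of a GIST-style iteration (the notation and the sufficient-decrease constant are simplifications of the published algorithm). prox_r(v, t) must return the minimizer of r(u) + ||u - v||^2 / (2t), which is available in closed form for many penalties; for the convex l1 penalty it is plain soft-thresholding.

        import numpy as np

        def gist_like(f, grad_f, r, prox_r, x0, n_iter=100, eta=2.0, sigma=1e-4):
            x = np.asarray(x0, dtype=float).copy()
            x_prev, g_prev, t = None, None, 1.0
            for _ in range(n_iter):
                g = grad_f(x)
                if x_prev is not None:
                    s, y = x - x_prev, g - g_prev
                    t = float(s @ s) / max(float(s @ y), 1e-12)   # Barzilai-Borwein step size
                while True:
                    x_new = prox_r(x - t * g, t)                  # proximal (shrinkage) step
                    decrease = sigma / (2.0 * t) * float(np.sum((x_new - x) ** 2))
                    if f(x_new) + r(x_new) <= f(x) + r(x) - decrease:
                        break                                     # sufficient decrease reached
                    t /= eta                                      # otherwise shrink the step
                x_prev, g_prev, x = x, g, x_new
            return x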

  4. Computation of optimal output-feedback compensators for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.

    1972-01-01

    The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple input-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration, and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.

  5. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
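
    The two-step structure (linear least squares for the amplitudes, an organized scan for the delays) can be sketched as follows; the exhaustive scan over a delay grid is a simple stand-in for the paper's organized search, and all variable names are illustrative.

        import itertools
        import numpy as np

        def fit_echoes(freqs, measured, delay_grid, n_echoes=2):
            # freqs: stepped CW frequencies (Hz); measured: complex reflection data.
            # Model: sum_k a_k * exp(-j 2 pi f tau_k) for n_echoes scatterers.
            best_resid, best_delays, best_amps = np.inf, None, None
            for delays in itertools.combinations(delay_grid, n_echoes):
                A = np.exp(-2j * np.pi * np.outer(freqs, delays))
                amps, *_ = np.linalg.lstsq(A, measured, rcond=None)   # amplitude step
                resid = np.linalg.norm(measured - A @ amps)           # delay-scan objective
                if resid < best_resid:
                    best_resid, best_delays, best_amps = resid, delays, amps
            return best_delays, best_amps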

  7. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/. Supplementary data are available at Bioinformatics online.

  8. Stopping Criteria for Log-Domain Diffeomorphic Demons Registration: An Experimental Survey for Radiotherapy Application.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2016-02-01

    A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performances of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ε, but to give insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the same strategies' performance in other deformable registration methods and body districts. © The Author(s) 2014.
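
    The three-versus-three comparison described above reduces to a few lines; the relative normalisation of the threshold is an assumption made here for illustration, not the paper's prescription.

        def should_stop(metric_history, eps):
            # Stop when the minimum of the optimisation metric over the three most
            # recent iterations improves on the minimum over the fourth-to-sixth
            # most recent iterations by less than the user-defined threshold eps.
            if len(metric_history) < 6:
                return False
            recent = min(metric_history[-3:])
            older = min(metric_history[-6:-3])
            return (older - recent) <= eps * abs(older)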

  9. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Bounding the Spacecraft Atmosphere Design Space for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Perka, Alan T.; Duffield, Bruce E.; Jeng, Frank F.

    2005-01-01

    The selection of spacecraft and space suit atmospheres for future human space exploration missions will play an important, if not critical, role in the ultimate safety, productivity, and cost of such missions. Internal atmosphere pressure and composition (particularly oxygen concentration) influence many aspects of spacecraft and space suit design, operation, and technology development. Optimal atmosphere solutions must be determined by an iterative process involving research, design, development, testing, and systems analysis. A necessary first step in this process is the establishment of working bounds on the atmosphere design space.

  11. Optimism as a Prior Belief about the Probability of Future Reward

    PubMed Central

    Kalra, Aditi; Seriès, Peggy

    2014-01-01

    Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences how learning about the association between visual targets and reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e., its influence reduces with increasing experience. Contrary to findings in the literature related to unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098

  12. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  13. Optimization and resilience in natural resources management

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2015-01-01

    We consider the putative tradeoff between optimization and resilience in the management of natural resources, using a framework that incorporates different sources of uncertainty that are common in natural resources management. We address one-time decisions, and then expand the decision context to the more complex problem of iterative decision making. For both cases we focus on two key sources of uncertainty: partial observability of system state and uncertainty as to system dynamics. Optimal management strategies will vary considerably depending on the timeframe being considered and the amount and quality of information that is available to characterize system features and project the consequences of potential decisions. But in all cases an optimal decision making framework, if properly identified and focused, can be useful in recognizing sound decisions. We argue that under the conditions of deep uncertainty that characterize many resource systems, an optimal decision process that focuses on robustness does not automatically induce a loss of resilience.

  14. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  15. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstruction images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast-iterative shrinkage thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.
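
    The accelerated shrinkage-thresholding step mentioned above can be sketched for a generic l1-regularised least-squares sub-problem; the sensitivity matrix A and capacitance data b are placeholders, and the paper's full cost function (sparsity plus low rank, solved within a split Bregman scheme) is not reproduced here.

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            # FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)
                z = y - grad / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x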

  16. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  17. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.

  18. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  19. Sculpt test problem analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweetser, John David

    2013-10-01

    This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. A total of 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.

  20. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
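
    One way to picture the embedding is a PSO loop that polishes the incumbent best position with a few Nelder-Mead steps at every iteration; the sketch below uses SciPy's Nelder-Mead implementation as that polishing step and generic parameter values, and the authors' actual embedding and operators may differ.

        import numpy as np
        from scipy.optimize import minimize

        def pso_nm(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
            v = np.zeros_like(x)                             # particle velocities
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_val.argmin()].copy()             # global best position
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.apply_along_axis(f, 1, x)
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[pbest_val.argmin()].copy()
                # Nelder-Mead polish of the incumbent best (the hybridization step)
                res = minimize(f, g, method="Nelder-Mead", options={"maxiter": 10})
                if res.fun < f(g):
                    g = res.x
            return g, f(g)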

  1. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem, thereby accelerating convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.

  2. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although only to a limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328

  3. Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation

    PubMed Central

    Fernández-Llatas, Carlos; Meneu, Teresa; Traver, Vicente; Benedi, José-Miguel

    2013-01-01

    Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires demanding iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To perform this guidance through telehealth, the use of formal clinical guidelines will allow the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines allows for the possibility of building automatic methods, using pattern recognition techniques, to estimate the proper models, as well as the mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized. PMID:24185841

  4. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and ... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion ... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a ...

  5. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    PubMed Central

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to the lack of powerful global exploration capability, and the fireworks algorithm may fail to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, the abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and the modified explosion operator and the novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036

  6. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.

  7. Optimization of lightweight structure and supporting bipod flexure for a space mirror.

    PubMed

    Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming

    2016-12-20

    This article presents an optimization process for integrated optomechanical design. The proposed optimization process for integrated optomechanical design comprises computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; then, deformed surface nodal data were transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two examples of optimization design were illustrated in this research, namely, an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm that is used in a spaceborne telescope and an optimum bipod flexure design that supports the optimum lightweight primary mirror. Finally, optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.

  8. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.

  9. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."

  10. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under a GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
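
    The perturbation and cooling rules summarized above (linearly shrinking move distance, exponentially decaying acceptance probability, a finite set of candidate locations) can be illustrated with a toy annealing loop; the numerical constants are arbitrary and the real spsann package should be consulted for the actual schedules.

        import math
        import random

        def spatial_annealing(points, candidates, objective, n_iter=10000,
                              p0=0.95, d0=1000.0):
            # points: current sample locations (list of (x, y) tuples)
            # candidates: finite set of allowed locations; objective: energy to minimise
            energy = objective(points)
            for k in range(1, n_iter + 1):
                max_dist = d0 * (1.0 - k / n_iter)           # move size shrinks linearly
                accept_p = p0 * math.exp(-5.0 * k / n_iter)  # acceptance decays exponentially
                i = random.randrange(len(points))
                old = points[i]
                nearby = [c for c in candidates
                          if c != old and math.dist(c, old) <= max_dist]
                if not nearby:
                    continue
                points[i] = random.choice(nearby)            # perturb one sample point
                new_energy = objective(points)
                if new_energy <= energy or random.random() < accept_p:
                    energy = new_energy                      # keep the move
                else:
                    points[i] = old                          # revert the move
            return points, energy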

  11. Biotechnology and genetic engineering in the new drug development. Part III. Biocatalysis, metabolic engineering and molecular modelling.

    PubMed

    Stryjewska, Agnieszka; Kiepura, Katarzyna; Librowski, Tadeusz; Lochyński, Stanisław

    2013-01-01

    Industrial biotechnology has been defined as the use and application of biotechnology for the sustainable processing and production of chemicals, materials and fuels. It makes use of biocatalysts such as microbial communities, whole-cell microorganisms or purified enzymes. In the review these processes are described. Drug design is an iterative process which begins when a chemist identifies a compound that displays an interesting biological profile and ends when both the activity profile and the chemical synthesis of the new chemical entity are optimized. Traditional approaches to drug discovery rely on a stepwise synthesis and screening program for large numbers of compounds to optimize activity profiles. Over the past ten to twenty years, scientists have used computer models of new chemical entities to help define activity profiles, geometries and relativities. This article introduces inter alia the concepts of molecular modelling and contains references for further reading.

  12. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  13. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.

    2009-04-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experiences with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize the dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was utilized for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both non-apex and apex models of vaginal cylinders, doses for the apex point and three dome points were higher for the apex model than for the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex point and dome points. The mean dose for the optimization points was 6.01 Gy for iterative optimization and was much higher than the 5.74 Gy for the geometric and equal-times routines. A step size of 1 cm gave the highest dose to the apex point. This step size was superior in terms of mean dose to the optimization points. Selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the dose distributions.
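
    As a simplified stand-in for the planning system's iterative routine (not the vendor's actual algorithm), dwell-weight optimisation can be pictured as driving the computed doses at the optimization points towards the 6 Gy prescription while keeping dwell times non-negative:

        import numpy as np

        def optimise_dwell_times(dose_rate, prescription, n_iter=500):
            # dose_rate[i, j]: dose at optimization point i per unit dwell time at position j
            # prescription[i]: desired dose at point i (e.g. 6 Gy everywhere)
            A = np.asarray(dose_rate, dtype=float)
            d = np.asarray(prescription, dtype=float)
            t = np.full(A.shape[1], d.mean() / A.sum(axis=0).mean())   # rough starting times
            step = 1.0 / (np.linalg.norm(A, 2) ** 2)
            for _ in range(n_iter):
                t = t - step * A.T @ (A @ t - d)     # gradient step on the squared dose error
                t = np.clip(t, 0.0, None)            # dwell times cannot be negative
            return t, A @ t                          # optimised times and resulting point doses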

  14. A novel channel selection method for optimal classification in different motor imagery BCI paradigms.

    PubMed

    Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin

    2015-10-21

    For sensorimotor-rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). Three datasets, recorded respectively from an MI task experiment, a two-class control experiment, and a four-class control experiment, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen is based on the Relief algorithm but is enhanced in two respects: the target sample selection strategy is changed and the idea of iterative computation is adopted, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels yielding the best classification accuracy was considered optimal. One-way ANOVA was employed to test the significance of the performance improvement among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels differed significantly among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the numbers of optimal channels in the three motor imagery BCI paradigms are distinct: from the MI task paradigm, to the two-class control paradigm, to the four-class control paradigm, the number of channels required to optimize classification accuracy increased. These findings may provide useful information for optimizing EEG-based BCI systems and further improving the performance of noninvasive BCIs.
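
    As background for readers unfamiliar with Relief-type scoring, the sketch below shows the plain Relief weight update that channel selection methods of this kind build on; it is not the enhanced IterRelCen algorithm, and the synthetic data, channel count and iteration budget are arbitrary assumptions.

      # Plain Relief feature weighting (the starting point for Relief-based channel
      # selection; this is not the enhanced IterRelCen algorithm).
      import numpy as np

      def relief_weights(X, y, n_iter=100, seed=0):
          """X: (samples, channels), y: binary labels. Returns per-channel weights."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.zeros(d)
          for _ in range(n_iter):
              i = rng.integers(n)                          # target sample
              dists = np.abs(X - X[i]).sum(axis=1)
              dists[i] = np.inf                            # exclude the target itself
              same = np.where(y == y[i])[0]
              diff = np.where(y != y[i])[0]
              hit = same[np.argmin(dists[same])]           # nearest same-class neighbor
              miss = diff[np.argmin(dists[diff])]          # nearest other-class neighbor
              w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
          return w / n_iter

      X = np.random.randn(200, 16)                         # 16 hypothetical channels
      y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)        # labels driven by channels 3 and 7
      ranking = np.argsort(relief_weights(X, y))[::-1]     # most informative channels first
      print(ranking[:5])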

  15. Batch, design optimization, and DNA sequencing study for continuous 1,3-propanediol production from waste glycerol by a soil-based inoculum.

    PubMed

    Kanjilal, Baishali; Noshadi, Iman; Bautista, Eddy J; Srivastava, Ranjan; Parnas, Richard S

    2015-03-01

    1,3-propanediol (1,3-PD) was produced with a robust fermentation process using waste glycerol feedstock from biodiesel production and a soil-based bacterial inoculum. An iterative inoculation method was developed to achieve independence from soil and selectively breed bacterial populations capable of glycerol metabolism to 1,3-PD. The inoculum showed high resistance to impurities in the feedstock. 1,3-PD selectivity and yield in batch fermentations were optimized through appropriate nutrient compositions and pH control. The batch yield of 1,3-PD was maximized to ~0.7 mol/mol for industrial glycerol, which was higher than that for pure glycerin. 16S rDNA sequencing results show a systematic selective enrichment of 1,3-PD-producing bacteria with iterative inoculation and subsequent process control. A statistical design of experiments was carried out on industrial glycerol batches to optimize conditions, which were used to run two continuous flow stirred-tank reactor (CSTR) experiments over a period of >500 h each. A detailed analysis of steady states at three dilution rates is presented. Enhanced specific 1,3-PD productivity was observed at faster dilution rates due to lower levels of solvent degeneration. A 1,3-PD productivity, specific productivity, and yield of 1.1 g/l hr, 1.5 g/g hr, and 0.6 mol/mol of glycerol were obtained at a dilution rate of 0.1 h⁻¹, which is bettered only by pure strains in pure glycerin feeds.

  16. Comparisons of Observed Process Quality in German and American Infant/Toddler Programs

    ERIC Educational Resources Information Center

    Tietze, Wolfgang; Cryer, Debby

    2004-01-01

    Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…

  17. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; Luke Moughon

    2005-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models has been developed and applied, illustrating the fundamental relationships between design parameters and friction losses. Low-friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several power cylinder design areas to the overall system analysis, including analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant and 5% for surface characteristics are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from the current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.

  18. On the breakdown modes and parameter space of Ohmic Tokamak startup

    NASA Astrophysics Data System (ADS)

    Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni

    2017-10-01

    Tokamak plasmas must be heated to very high temperatures. The process of turning the initial dilute neutral hydrogen gas at room temperature into fully ionized plasma is called tokamak startup. Even after over 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulation but by trial and error. In recent years, however, the problem has drawn much attention because of one of the challenges faced by ITER: the maximum electric field available for startup cannot exceed 0.3 V/m, which narrows the parameter range for successful startup. Moreover, the underlying physical mechanism is far from being understood, either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, including ITER. We have found three situations during the discharge, as a function of the initial parameters: no breakdown, breakdown, and runaway. Moreover, the breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown on ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.

  19. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not initially represented by the analytic reference model. To overcome the interference between subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this reference model, the paper develops a digital decentralized adaptive tracker using optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at the specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance in both the transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
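
    The trial-to-trial learning step that ILC contributes can be summarized by the classic P-type update law u_{k+1} = u_k + L e_k. The sketch below applies it to a toy first-order sampled-data plant; the plant, the hand-picked learning gain and the reference trajectory are assumptions for illustration and are not the paper's decentralized digital-redesign tracker.

      # Generic P-type iterative learning control on a toy plant (illustration of
      # the ILC idea only, not the paper's decentralized adaptive tracker).
      import numpy as np

      def simulate(u):
          """Toy first-order sampled-data plant: y[k+1] = 0.8*y[k] + 0.5*u[k]."""
          y = np.zeros(len(u) + 1)
          for k in range(len(u)):
              y[k + 1] = 0.8 * y[k] + 0.5 * u[k]
          return y[1:]

      T = 50
      ref = np.sin(np.linspace(0, 2 * np.pi, T))      # desired trajectory
      u = np.zeros(T)                                 # control input of the first trial
      L = 0.8                                         # learning gain, chosen by hand here
      for trial in range(30):
          e = ref - simulate(u)                       # tracking error of this trial
          u = u + L * e                               # learn from the error for the next trial
      print("final RMS tracking error:", np.sqrt(np.mean((ref - simulate(u)) ** 2)))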

  20. RAMI Analysis for Designing and Optimizing Tokamak Cooling Water System (TCWS) for the ITER's Fusion Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, Juan J; Reiersen, Wayne T

    U.S.-ITER is responsible for the design, engineering, and procurement of the Tokamak Cooling Water System (TCWS). TCWS is designed to provide cooling and baking for client systems that include the first wall/blanket, vacuum vessel, divertor, and neutral beam injector. Additional operations that support these primary functions include chemical control of water provided to client systems, draining and drying for maintenance, and leak detection/localization. TCWS interfaces with 27 systems including the secondary cooling system, which rejects this heat to the environment. TCWS transfers the heat generated in the Tokamak during nominal pulsed operation - 850 MW at up to 150 °C and 4.2 MPa water pressure. Impurities are diffused from in-vessel components and the vacuum vessel by water baking at 200-240 °C at up to 4.4 MPa. TCWS is complex because it serves vital functions for four primary clients whose performance is critical to ITER's success and interfaces with more than 20 additional ITER systems. The conceptual design of this one-of-a-kind cooling system has been completed; however, several issues remain that must be resolved before moving to the next stage of the design process. The 2004 baseline design specified cooling loops that have no fault tolerance for component failures. During plasma operation, each cooling loop relies on a single pump, a single pressurizer, and one heat exchanger. Consequently, failure of any of these would render TCWS inoperable, resulting in plasma shutdown. The application of reliability, availability, maintainability, and inspectability (RAMI) tools during the different stages of TCWS design is crucial for optimization purposes and for maintaining compliance with project requirements. RAMI analysis will indicate appropriate equipment redundancy that provides graceful degradation in the event of an equipment failure. This analysis helps demonstrate that using proven, commercially available equipment is better than using custom-designed equipment with no field experience, and lowers specific costs while providing higher reliability. This paper presents a brief description of the TCWS conceptual design and the application of RAMI tools to optimize the design at different stages during the project.

  1. Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc

    2007-02-01

    Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE, which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.

  2. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald.

    PubMed

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2014-02-28

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, which is the first implementation of a polarizable model making large-scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
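
    For readers who want to see where the iterative solver sits, the sketch below is a textbook preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner applied to a random symmetric positive-definite system standing in for the polarization equations; it is not the Tinker-HP implementation and involves no Ewald machinery.

      # Generic preconditioned conjugate gradient with a Jacobi preconditioner,
      # solving A x = b for an SPD stand-in of the dipole interaction matrix.
      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=200):
          M_inv = 1.0 / np.diag(A)                 # Jacobi (diagonal) preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          for _ in range(max_iter):
              Ap = A @ p
              alpha = (r @ z) / (p @ Ap)
              x += alpha * p
              r_new = r - alpha * Ap
              if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
                  break
              z_new = M_inv * r_new
              beta = (r_new @ z_new) / (r @ z)
              p = z_new + beta * p
              r, z = r_new, z_new
          return x

      rng = np.random.default_rng(1)
      B = rng.standard_normal((50, 50))
      A = B @ B.T + 50 * np.eye(50)                # SPD test matrix
      b = rng.standard_normal(50)
      x = pcg(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))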

  3. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald

    PubMed Central

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, which is the first implementation of a polarizable model making large-scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations. PMID:26512230

  4. Development of a laser cleaning method for the first mirror surface of the charge exchange recombination spectroscopy diagnostics on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, A. P., E-mail: APKuznetsov@mephi.ru; Buzinskij, O. I.; Gubsky, K. L.

    A set of optical diagnostics is planned for measuring the plasma characteristics in ITER. Optical elements located inside the discharge chamber are exposed to an intense radiation load, sputtering due to collisions with energetic atoms formed in the charge transfer processes, and contamination due to recondensation of materials sputtered from different parts of the chamber structure. Removing the films of the sputtered materials from the mirrors with the aid of pulsed laser radiation is an efficient cleaning method enabling recovery of the optical properties of the mirrors. In this work, we studied the efficiency of removal of metal oxide films by pulsed radiation of a fiber laser. Optimization of the laser cleaning conditions was carried out on samples representing metal substrates polished to optical quality, with films deposited on them imitating the chemical composition and conditions expected in ITER. It is shown that, by a proper selection of the modes of radiation exposure of the surface with a deposited film, it is feasible to restore the original high reflection characteristics of the optical elements.

  5. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

    In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
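
    The re-alignment idea can be pictured with a few lines of linear algebra: compute the principal directions of the current approximation set and use them (possibly truncated) as the new pattern-search moves. The sketch below is an illustration only, not the paper's multi-objective Tabu Search; the synthetic archive, the step length and the 2% variance threshold are arbitrary assumptions.

      # Sketch: realign pattern-search directions along the principal components of
      # the current approximation set, and drop directions with little variance.
      import numpy as np

      archive = np.random.randn(40, 5) @ np.diag([3.0, 1.0, 0.5, 0.1, 0.05])  # approximation set
      centered = archive - archive.mean(axis=0)
      _, s, Vt = np.linalg.svd(centered, full_matrices=False)

      explained = s ** 2
      keep = explained / explained.sum() > 0.02      # informed reduction of the design space
      directions = Vt[keep]                          # rows = re-aligned search directions

      x = archive[0]                                 # current design point
      step = 0.1
      moves = [x + step * d for d in directions] + [x - step * d for d in directions]
      print(len(moves), "pattern-search moves along", keep.sum(), "directions")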

  6. Steady-state configuration and tension calculations of marine cables under complex currents via separated particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Xu, Xue-song

    2014-12-01

    Under complex currents, the governing equations of motion of marine cables are complex and nonlinear, and the calculation of cable configuration and tension becomes difficult compared with that under uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the initial guessed solution to the governing equations. To improve the stability of the numerical calculation, this paper proposes a separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. Via the separated particle swarm optimization, these governing nonlinear equations can be solved successfully from any initial solution, and the numerical calculation process is very stable. For the calculation of the cable configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.
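
    The "separated" strategy can be sketched as an outer sweep over groups of variables, with a short particle swarm optimization run on each group while the other groups are held fixed. The toy quadratic objective, the group sizes and the PSO constants below are assumptions for illustration; the sketch does not solve the cable governing equations.

      # Sketch of separated PSO: optimize one group of variables at a time while
      # the remaining groups stay fixed (toy objective, not the cable equations).
      import numpy as np

      def objective(x):
          return np.sum((x - np.arange(len(x))) ** 2)    # minimum at x = [0, 1, ..., 11]

      rng = np.random.default_rng(0)
      x = rng.uniform(-5, 5, 12)
      groups = [slice(0, 4), slice(4, 8), slice(8, 12)]  # separated variable groups

      for sweep in range(5):
          for g in groups:
              dim = x[g].size
              pos = rng.uniform(-5, 5, (20, dim))        # swarm for this group only
              vel = np.zeros_like(pos)
              pbest, pbest_val = pos.copy(), np.full(20, np.inf)
              gbest, gbest_val = x[g].copy(), np.inf
              for it in range(50):
                  for i in range(20):
                      trial = x.copy()
                      trial[g] = pos[i]
                      f = objective(trial)
                      if f < pbest_val[i]:
                          pbest[i], pbest_val[i] = pos[i].copy(), f
                      if f < gbest_val:
                          gbest, gbest_val = pos[i].copy(), f
                  r1, r2 = rng.random((20, dim)), rng.random((20, dim))
                  vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                  pos = pos + vel
              x[g] = gbest                               # keep the best values found for this group
      print("objective after separated sweeps:", objective(x))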

  7. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

  8. Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions

    NASA Technical Reports Server (NTRS)

    Gilland, James H.

    1991-01-01

    The detailed mission and system optimization of low thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system design without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.

  9. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.

  10. Method and Process Development of Advanced Atmospheric Plasma Spraying for Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Mihm, Sebastian; Duda, Thomas; Gruner, Heiko; Thomas, Georg; Dzur, Birger

    2012-06-01

    Over the last few years, global economic growth has triggered a dramatic increase in the demand for resources, resulting in a steady rise in prices for energy and raw materials. In the gas turbine manufacturing sector, process optimizations of cost-intensive production steps offer a heightened potential for savings and form the basis for securing future competitive advantages in the market. In this context, the atmospheric plasma spraying (APS) process for thermal barrier coatings (TBC) has been optimized. A constraint for the optimization of the APS coating process is the use of the existing coating equipment. Furthermore, the current coating quality and characteristics must not change, so as to avoid new qualification and testing. Using experience in APS and empirically gained data, the process optimization plan included the variation of, e.g., the plasma gas composition and flow rate, the electrical power, the arrangement and angle of the powder injectors in relation to the plasma jet, the grain size distribution of the spray powder and the plasma torch movement procedures such as spray distance, offset and iteration. In particular, plasma properties (enthalpy, velocity and temperature), powder injection conditions (injection point, injection speed, grain size and distribution) and the coating lamination (coating pattern and spraying distance) are examined. The optimized process and resulting coating were compared to the current situation using several diagnostic methods. The improved process significantly reduces costs and achieves the requirement of comparable coating quality. Furthermore, a contribution was made towards better comprehension of the APS of ceramics and the definition of a better method for future process developments.

  11. Accelerated wavefront determination technique for optical imaging through scattering medium

    NASA Astrophysics Data System (ADS)

    He, Hexiang; Wong, Kam Sing

    2016-03-01

    Wavefront shaping applied to scattered light is a promising optical imaging method in biological systems. Normally, the optimized modulation is obtained by hardware iteration between a liquid-crystal spatial light modulator (LC-SLM) and a CCD. Here we introduce an improved method for this optimization process. The core of the proposed method is to first detect the disturbed wavefront and then calculate the modulation phase pattern by computer simulation. In particular, a phase retrieval method together with phase conjugation is most effective. In this way, the LC-SLM-based system can complete the wavefront optimization and imaging restoration within several seconds, which is two orders of magnitude faster than the conventional technique. The experimental results show good imaging quality and may contribute to real-time imaging recovery in scattering media.
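
    A minimal version of the simulated phase-retrieval step can be written as a Gerchberg-Saxton-style loop that alternates between the SLM plane and the camera plane, imposing the measured amplitude in one and unit amplitude in the other. The sketch below uses a synthetic 64 x 64 field and a plain Fourier propagator; it is a textbook illustration under those assumptions, not the authors' exact procedure.

      # Gerchberg-Saxton-style phase retrieval: recover an SLM phase pattern from a
      # measured far-field amplitude by iterating between the two planes.
      import numpy as np

      rng = np.random.default_rng(0)
      true_phase = rng.uniform(0, 2 * np.pi, (64, 64))
      target_amp = np.abs(np.fft.fft2(np.exp(1j * true_phase)))    # "measured" amplitude

      phase = np.zeros((64, 64))                                   # initial guess on the SLM
      for _ in range(100):
          field = np.fft.fft2(np.exp(1j * phase))                  # propagate to the camera plane
          field = target_amp * np.exp(1j * np.angle(field))        # impose the measured amplitude
          back = np.fft.ifft2(field)                               # propagate back to the SLM plane
          phase = np.angle(back)                                   # impose unit SLM amplitude
      residual = np.abs(np.abs(np.fft.fft2(np.exp(1j * phase))) - target_amp).mean()
      print("mean amplitude residual:", residual)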

  12. Springback effects during single point incremental forming: Optimization of the tool path

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process for manufacturing sheet metal parts. The process is more flexible than conventional ones and well suited for small-batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever the machine considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape of the formed part to be predicted with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.

  13. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear Systems.

    PubMed

    Gao, Weinan; Jiang, Zhong-Ping

    2018-06-01

    This paper proposes a novel data-driven control approach to address the problem of adaptive optimal tracking for a class of nonlinear systems taking the strict-feedback form. Adaptive dynamic programming (ADP) and nonlinear output regulation theories are integrated for the first time to compute an adaptive near-optimal tracker without any a priori knowledge of the system dynamics. Fundamentally different from adaptive optimal stabilization problems, the solution to a Hamilton-Jacobi-Bellman (HJB) equation, not necessarily a positive definite function, cannot be approximated through the existing iterative methods. This paper proposes a novel policy iteration technique for solving positive semidefinite HJB equations with rigorous convergence analysis. A two-phase data-driven learning method is developed and implemented online by ADP. The efficacy of the proposed adaptive optimal tracking control methodology is demonstrated via a Van der Pol oscillator with time-varying exogenous signals.
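
    The alternation between policy evaluation and policy improvement that underlies such ADP schemes is easiest to see in the linear-quadratic special case, where each evaluation step reduces to a Lyapunov equation. The sketch below (Hewer-style policy iteration for a discrete-time LQR problem, with an assumed initial stabilizing gain) is only a stand-in for the paper's HJB-based method for strict-feedback nonlinear systems.

      # Policy iteration for a discrete-time LQR problem: evaluate the current
      # policy via a Lyapunov equation, then improve the gain greedily.
      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.0], [0.1]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      K = np.array([[1.0, 2.0]])                  # initial stabilizing gain (assumed)
      for _ in range(20):
          Ac = A - B @ K
          # Policy evaluation: cost matrix P of the current policy u = -K x
          P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
          # Policy improvement: greedy gain with respect to the evaluated cost
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
      print("converged feedback gain K =", K)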

  14. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound.

    PubMed

    Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai

    2011-01-01

    In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.

  15. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz., spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing the parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergence capability with a minimum number of iterations.

  16. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  17. Distributed Blowing and Suction for the Purpose of Streak Control in a Boundary Layer Subjected to a Favorable Pressure Gradient

    NASA Technical Reports Server (NTRS)

    Forgoston, Eric; Tumin, Anatoli; Ashpis, David E.

    2005-01-01

    An analysis of the optimal control by blowing and suction in order to generate streamwise velocity streaks is presented. The problem is examined using an iterative process that employs the Parabolized Stability Equations for an incompressible fluid along with its adjoint equations. In particular, distributions of blowing and suction are computed for both the normal and tangential velocity perturbations for various choices of parameters.

  18. Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems

    DTIC Science & Technology

    2016-08-05

    technique which used unobserved "intermediate" variables to break a high-dimensional estimation problem such as least-squares (LS) optimization of a large ... Least Squares (GEM-LS). The estimator is iterative and the work in this time period focused on characterizing the convergence properties of this ... approach by relaxing the statistical assumptions, which is termed the Relaxed Approximate Graph-Structured Recursive Least Squares (RAGS-RLS). This

  19. Optimal Topology Control and Power Allocation for Minimum Energy Consumption in Consensus Networks

    DTIC Science & Technology

    2011-12-16

    network topologies, such as small world graphs, can greatly increase the convergence rate. In [9], the authors show that nonbipartite Ramanujan graphs ... of iterations necessary to achieve consensus. From this perspective, enforcing a small world, scale-free, or Ramanujan graph topology may not be the

  20. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.

    PubMed

    Lewis, F L; Vamvoudakis, Kyriakos G

    2011-02-01

    Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.

  1. The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.

    PubMed

    Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan

    2018-05-04

    Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.

  2. Application of numerical optimization techniques to control system design for nonlinear dynamic models of aircraft

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward; Ge, Fuying

    1989-01-01

    Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to the desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers that satisfy specifications on flying qualities and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.

  3. The meaning of death: some simulations of a model of healthy and unhealthy consumption.

    PubMed

    Forster, M

    2001-07-01

    Simulations of a model of healthy and unhealthy consumption are used to investigate the impact of various terminal conditions on life-span, pathways of health-related consumption and health. A model in which life-span and the 'death' stock of health are fixed is compared to versions in which (i) the 'death' stock of health is freely chosen; (ii) life-span is freely chosen; (iii) both the 'death' stock of health and life-span are freely chosen. The choice of terminal conditions has a striking impact on optimal plans. Results are discussed with reference to the existing demand for health literature and illustrate the application of iterative processes to determine optimal life-span, the role played by the marginal value of health capital in determining optimal plans, and the importance of checking the second-order conditions for the optimal choice of life-span.

  4. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
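
    The core ingredient, a radial basis function interpolant refit and re-minimized around the best point, can be sketched in a few lines. The cubic kernel with a linear tail, the random candidate search over a fixed trust-region radius and the toy objective below are all assumptions for illustration; this is not the CONORBIT algorithm and it handles no constraints.

      # Sketch of RBF-surrogate optimization: fit a cubic RBF with a linear tail to
      # the evaluated points, then minimize it over a small trust region to pick
      # the next expensive evaluation.
      import numpy as np

      def expensive(x):                              # stand-in black-box objective
          return np.sum((x - 0.3) ** 2) + np.sin(5 * x).sum()

      def rbf_fit(X, f):
          """Cubic RBF with linear tail: s(x) = sum_i w_i ||x - x_i||^3 + c.x + d."""
          n, dim = X.shape
          Phi = np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 3
          P = np.hstack([X, np.ones((n, 1))])
          A = np.block([[Phi, P], [P.T, np.zeros((dim + 1, dim + 1))]])
          coef = np.linalg.solve(A, np.concatenate([f, np.zeros(dim + 1)]))
          return coef[:n], coef[n:]

      def rbf_eval(x, X, w, tail):
          phi = np.linalg.norm(X - x, axis=-1) ** 3
          return phi @ w + tail[:-1] @ x + tail[-1]

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (15, 2))                # initial design
      f = np.array([expensive(x) for x in X])
      for it in range(10):
          w, tail = rbf_fit(X, f)
          center = X[np.argmin(f)]
          cands = center + 0.2 * rng.uniform(-1, 1, (200, 2))   # trust-region candidates
          best = cands[np.argmin([rbf_eval(c, X, w, tail) for c in cands])]
          X = np.vstack([X, best])
          f = np.append(f, expensive(best))
      print("best value found:", f.min())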

  5. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets.

  6. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  7. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review provides a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to establish the spatial correspondence between two or more images and to obtain the transformation describing their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the transformation relationship between different images. Result: The major steps of image registration include the geometric transformation, the combination of the images, the image similarity measure, iterative optimization and the interpolation process. Conclusion: The contribution of this review is to sort related image registration research methods and provide a brief reference for researchers on image registration. PMID:28845149
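
    The steps listed above (a geometric transform, interpolation, a similarity measure, and an iterative optimizer) fit together as in the minimal sketch below, which registers a translation-only misalignment by minimizing a sum-of-squared-differences metric; the synthetic image, the Powell optimizer and the translation-only model are assumptions chosen purely for brevity.

      # Minimal registration loop: translation model + linear interpolation +
      # sum-of-squared-differences metric + iterative optimization.
      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      fixed = rng.random((64, 64))
      moving = nd_shift(fixed, (3.2, -1.7), order=1)             # the "unknown" misalignment

      def ssd(t):
          resampled = nd_shift(moving, (-t[0], -t[1]), order=1)  # interpolation step
          return np.mean((resampled - fixed) ** 2)               # similarity measure

      result = minimize(ssd, x0=[0.0, 0.0], method="Powell")     # iterative optimization
      print("recovered translation:", np.round(result.x, 2))     # expected to be near [3.2, -1.7]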

  8. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.

  9. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage saving for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
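
    The role of the preconditioner in such a solver can be shown with SciPy's BiCGSTAB on a small Helmholtz-type matrix; in the sketch below an incomplete-LU factorization stands in for the paper's multigrid preconditioner, and the grid size, wavenumber and point source are arbitrary assumptions.

      # Preconditioned BiCGSTAB on a small frequency-domain Helmholtz-type system;
      # an ILU factorization is used here only as a stand-in preconditioner.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 100                                        # 2-D grid of n x n points
      h = 1.0 / n
      k = 10.0                                       # wavenumber (constant medium assumed)
      I = sp.identity(n)
      T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(I, T) + sp.kron(T, I)) / h**2 + (k ** 2) * sp.identity(n * n)
      A = A.tocsc().astype(complex)

      b = np.zeros(n * n, dtype=complex)
      b[n * n // 2 + n // 2] = 1.0                   # point source

      ilu = spla.spilu(A, drop_tol=1e-4)             # stand-in for a multigrid preconditioner
      M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)
      x, info = spla.bicgstab(A, b, M=M, maxiter=500)
      print("converged" if info == 0 else f"bicgstab info = {info}")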

  10. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem; for those beamlets, it may be possible to use fewer particles in the dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for the beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time, including both the MC dose calculations and the plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.

  11. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost-effective method for solving positive definite banded symmetric generalized eigenproblems, especially for large-scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor, and a high speed-up is obtained for parallel implementations. An optimal shift balances the total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate of the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
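
    A shift σ turns each subspace-iteration step into a solve with K − σM followed by a Rayleigh-Ritz projection. The dense-matrix sketch below shows that iteration; SciPy's LU factorization stands in for the parallel banded solvers of the paper, and the shift is simply a user-supplied guess rather than the analytic estimate derived there.

```python
import numpy as np
from scipy.linalg import eigh, lu_factor, lu_solve

def shifted_subspace_iteration(K, M, sigma, p, n_iter=30, seed=0):
    """Subspace iteration for K x = lambda M x near the shift sigma.
    Dense stand-in for the parallel banded solvers discussed in the abstract."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    lu = lu_factor(K - sigma * M)          # factor once; a banded solver in the paper
    X = rng.standard_normal((n, p))
    for _ in range(n_iter):
        Y = lu_solve(lu, M @ X)            # inverse iteration step with the shift
        # Rayleigh-Ritz projection onto the current subspace
        Kr, Mr = Y.T @ K @ Y, Y.T @ M @ Y
        w, Q = eigh(Kr, Mr)
        X = Y @ Q                          # M-orthonormal basis for the next sweep
    return w, X  # approximate eigenvalues closest to sigma, and eigenvectors

# Toy example: small symmetric positive definite matrices
n = 50
K = (np.diag(2.0 * np.ones(n)) + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
M = np.eye(n)
vals, vecs = shifted_subspace_iteration(K, M, sigma=-0.1, p=4)
print(vals)  # should approach the 4 smallest eigenvalues of K
```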

  12. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real-world situations, since they can dramatically reduce the computational cost of using IHMs in an iterative model evaluation process. In addition, our studies generated insights into the human-nature water conflicts in the specific study area and suggested potential solutions to address them.
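
    The common structure behind response-surface approaches such as DYCORS and SOIM is a loop that fits a cheap surrogate to the expensive model runs, optimizes the surrogate to propose the next run, evaluates the expensive model there, and refits. The sketch below illustrates that loop with an RBF surrogate and random candidate search; `expensive_model` is a hypothetical stand-in for a GSFLOW-type simulation, and all settings are illustrative, not the published algorithms.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_optimize(expensive_model, bounds, n_init=10, n_iter=30, n_cand=2000, seed=0):
    """Minimal surrogate-based optimization loop (RBF surrogate + candidate search).
    A simplified stand-in for response-surface methods such as DYCORS/SOIM."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = lo + (hi - lo) * rng.random((n_init, len(lo)))    # initial design
    y = np.array([expensive_model(x) for x in X])
    for _ in range(n_iter):
        surrogate = RBFInterpolator(X, y)                 # cheap response surface
        cand = lo + (hi - lo) * rng.random((n_cand, len(lo)))
        x_next = cand[np.argmin(surrogate(cand))]         # optimize the surrogate
        y_next = expensive_model(x_next)                  # one expensive model run
        X, y = np.vstack([X, x_next]), np.append(y, y_next)
    best = np.argmin(y)
    return X[best], y[best]

# Toy usage with a cheap stand-in objective
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
x_best, f_best = surrogate_optimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, bounds)
print(x_best, f_best)
```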

  13. The Battlefield Environment Division Modeling Framework (BMF). Part 1: Optimizing the Atmospheric Boundary Layer Environment Model for Cluster Computing

    DTIC Science & Technology

    2014-02-01

    idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative ... understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel ... scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2

  14. On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.

    2017-05-01

    We consider a variable metric linesearch-based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust, and competitive when compared with recently proposed approaches to the optimization problems arising in the considered applications.
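
    For a concrete instance of a linesearch-based proximal-gradient step, the sketch below minimizes a smooth least-squares term plus an l1 penalty, backtracking on the step size until a sufficient-decrease condition holds. It is a generic illustration of the algorithm family under assumed problem data, not the variable-metric method of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_linesearch(A, b, lam, x0, n_iter=200, alpha0=1.0, beta=0.5):
    """Proximal gradient with backtracking for f(x) = 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = x0.copy()
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # gradient of the smooth part
        alpha = alpha0
        while True:
            x_new = soft_threshold(x - alpha * g, alpha * lam)   # proximal step
            d = x_new - x
            # sufficient decrease (quadratic upper bound) condition
            if f(x_new) <= f(x) + g @ d + (0.5 / alpha) * (d @ d):
                break
            alpha *= beta                # backtrack on the step size
        x = x_new
    return x

# Small random sparse-recovery example
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
x_hat = prox_grad_linesearch(A, b, lam=0.1, x0=np.zeros(100))
print(np.round(x_hat[:8], 3))
```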

  15. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.

  16. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In, Y.; Park, J. -K.; Jeon, Y. M.

    Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (Pth) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B0 ~ 4×10^-5 even at high-beta plasmas (βN ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τE, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. Rx = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q95 = 5 ± 0.25). A more encouraging finding was that even when Rx cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal Rx allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of Pth dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  17. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    NASA Astrophysics Data System (ADS)

    In, Y.; Park, J.-K.; Jeon, Y. M.; Kim, J.; Park, G. Y.; Ahn, J.-W.; Loarte, A.; Ko, W. H.; Lee, H. H.; Yoo, J. W.; Juhn, J. W.; Yoon, S. W.; Park, H.; Physics Task Force in KSTAR, 3D

    2017-11-01

    An extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (Pth) for L-H transition. The n = 1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B0 ~ 4 × 10^-5 even at high-beta plasmas (βN ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n = 1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τE, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. Rx = 1.44 ± 0.02 m) proved to be quite critical to reach full n = 1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q95 = 5 ± 0.25). A more encouraging finding was that even when Rx cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal Rx allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n = 1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the ‘wet’ areas. Besides, a systematic survey of Pth dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  18. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    DOE PAGES

    In, Y.; Park, J. -K.; Jeon, Y. M.; ...

    2017-08-24

    Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (Pth) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B0 ~ 4×10^-5 even at high-beta plasmas (βN ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τE, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. Rx = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q95 = 5 ± 0.25). A more encouraging finding was that even when Rx cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal Rx allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of Pth dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  19. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    PubMed Central

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no or only partial knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm can achieve the optimal solutions within a small number of iterations and reveal various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively. PMID:28820496

  20. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    PubMed

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no or only partial knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm can achieve the optimal solutions within a small number of iterations and reveal various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively.
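
    Both records above rely on nonlinear fractional programming to maximize an energy-efficiency ratio. The classic tool for such ratios is Dinkelbach's iteration, sketched below for a single-link toy problem (rate over consumed power); the channel gain, bandwidth, and circuit-power numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dinkelbach_energy_efficiency(g, N0, B, Pc, Pmax, tol=1e-9, max_iter=50):
    """Maximize EE(p) = B*log2(1 + g*p/N0) / (p + Pc) over 0 <= p <= Pmax
    via Dinkelbach's iteration for fractional programs."""
    rate = lambda p: B * np.log2(1.0 + g * p / N0)
    lam = 0.0                                   # current energy-efficiency estimate
    for _ in range(max_iter):
        # Inner problem max_p rate(p) - lam*(p + Pc) has a closed-form stationary point.
        p = np.clip(B / (np.log(2.0) * lam) - N0 / g if lam > 0 else Pmax, 0.0, Pmax)
        F = rate(p) - lam * (p + Pc)
        lam_new = rate(p) / (p + Pc)
        if abs(F) < tol:                        # F -> 0 at the optimal ratio
            break
        lam = lam_new
    return p, lam_new

p_opt, ee = dinkelbach_energy_efficiency(g=0.8, N0=1e-3, B=1.0, Pc=0.1, Pmax=1.0)
print(f"optimal power {p_opt:.4f} W, energy efficiency {ee:.3f} bit/J")
```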

  1. The topographic development and areal parametric characterization of a stratified surface polished by mass finishing

    NASA Astrophysics Data System (ADS)

    Walton, Karl; Blunt, Liam; Fleming, Leigh

    2015-09-01

    Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’ (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) as a function of mass finishing duration is currently predicted with an exponential model; this model is shown to be limited for the current surface type at a critical range of surface finishes. Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.

  2. Electrical Capacitance Tomography Measurement of the Migration of Ice Frontal Surface in Freezing Soil

    NASA Astrophysics Data System (ADS)

    Liu, J.; Suo, X. M.; Zhou, S. S.; Meng, S. Q.; Chen, S. S.; Mu, H. P.

    2016-12-01

    The tracking of the migration of the ice frontal surface is crucial for understanding the underlying physical mechanisms in freezing soil. Owing to its distinct advantages, including non-invasive sensing, high safety, low cost and high data acquisition speed, electrical capacitance tomography (ECT) is considered to be a promising visualization measurement method. In this paper, the ECT method is used to visualize the migration of the ice frontal surface in freezing soil. With the main motivation of improving imaging quality, a loss function with multiple regularizers that incorporate prior information related to the imaging objects is proposed to cast the ECT image reconstruction task into an optimization problem. An iteration scheme that integrates the strengths of the split Bregman iteration (SBI) method is developed to search for the optimal solution of the proposed loss function. An unclosed-electrode sensor is designed to satisfy the requirements of practical measurements. An experimental system of one-dimensional freezing in frozen soil is constructed, and the ice frontal surface migration in the freezing process of a wet soil sample containing five percent moisture is measured. The visualization measurement results validate the feasibility and effectiveness of the ECT visualization method.
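
    The reconstruction above casts ECT imaging as a regularized optimization solved with a split Bregman iteration. As a generic illustration of that iteration (not the paper's multi-regularizer loss), the sketch below solves an l1-regularized least-squares problem by splitting the l1 variable and alternating a linear solve, a shrinkage step, and a Bregman update; the toy system is an assumption for demonstration.

```python
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, y, mu, lam, n_iter=100):
    """Solve min_u 0.5*||Au - y||^2 + mu*||u||_1 with a split Bregman iteration
    (splitting d = u, quadratic penalty weight lam, Bregman variable b)."""
    n = A.shape[1]
    u, d, b = np.zeros(n), np.zeros(n), np.zeros(n)
    # The u-update solves (A^T A + lam I) u = A^T y + lam (d - b).
    H = A.T @ A + lam * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        u = np.linalg.solve(H, Aty + lam * (d - b))
        d = shrink(u + b, mu / lam)       # shrinkage handles the l1 term
        b = b + u - d                     # Bregman update
    return u

# Toy sparse reconstruction
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
u_true = np.zeros(80); u_true[[3, 10, 40]] = [2.0, -1.5, 1.0]
y = A @ u_true + 0.01 * rng.standard_normal(30)
u_rec = split_bregman_l1(A, y, mu=0.05, lam=1.0)
print(np.round(u_rec[[3, 10, 40]], 2))
```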

  3. Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost

    PubMed Central

    Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor

    2009-01-01

    Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321

  4. Optimizing model: insemination, replacement, seasonal production, and cash flow.

    PubMed

    DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A

    1992-03-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.
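
    The replacement/insemination problem above is a Markov decision process solved by dynamic programming over a long decision horizon. A minimal, generic value-iteration sketch for a discounted MDP is shown below; the tiny transition and reward arrays are purely illustrative and bear no relation to the 151,200-state herd model of the paper.

```python
import numpy as np

def value_iteration(P, R, discount=0.95, tol=1e-8, max_iter=10_000):
    """Generic value iteration for a discounted MDP.
    P[a, s, s'] = transition probability, R[a, s] = expected immediate reward."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[a, s] = R[a, s] + discount * sum_s' P[a, s, s'] * V[s']
        Q = R + discount * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=0)
    return V, policy

# Tiny illustrative 2-state, 2-action example (keep vs. replace)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0: keep
              [[1.0, 0.0], [1.0, 0.0]]])   # action 1: replace (reset to state 0)
R = np.array([[1.0, 0.2],                  # reward of keeping in each state
              [0.5, 0.5]])                 # reward of replacing
V, policy = value_iteration(P, R)
print(V, policy)
```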

  5. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs to be tested in ITER as a part of the TBM program. Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the size of the breeder unit module, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  6. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.

  7. Status of Europe's contribution to the ITER EC system

    NASA Astrophysics Data System (ADS)

    Albajar, F.; Aiello, G.; Alberti, S.; Arnold, F.; Avramidis, K.; Bader, M.; Batista, R.; Bertizzolo, R.; Bonicelli, T.; Braunmueller, F.; Brescan, C.; Bruschi, A.; von Burg, B.; Camino, K.; Carannante, G.; Casarin, V.; Castillo, A.; Cauvard, F.; Cavalieri, C.; Cavinato, M.; Chavan, R.; Chelis, J.; Cismondi, F.; Combescure, D.; Darbos, C.; Farina, D.; Fasel, D.; Figini, L.; Gagliardi, M.; Gandini, F.; Gantenbein, G.; Gassmann, T.; Gessner, R.; Goodman, T. P.; Gracia, V.; Grossetti, G.; Heemskerk, C.; Henderson, M.; Hermann, V.; Hogge, J. P.; Illy, S.; Ioannidis, Z.; Jelonnek, J.; Jin, J.; Kasparek, W.; Koning, J.; Krause, A. S.; Landis, J. D.; Latsas, G.; Li, F.; Mazzocchi, F.; Meier, A.; Moro, A.; Nousiainen, R.; Purohit, D.; Nowak, S.; Omori, T.; van Oosterhout, J.; Pacheco, J.; Pagonakis, I.; Platania, P.; Poli, E.; Preis, A. K.; Ronden, D.; Rozier, Y.; Rzesnicki, T.; Saibene, G.; Sanchez, F.; Sartori, F.; Sauter, O.; Scherer, T.; Schlatter, C.; Schreck, S.; Serikov, A.; Siravo, U.; Sozzi, C.; Spaeh, P.; Spichiger, A.; Strauss, D.; Takahashi, K.; Thumm, M.; Tigelis, I.; Vaccaro, A.; Vomvoridis, J.; Tran, M. Q.; Weinhorst, B.

    2015-03-01

    The electron cyclotron (EC) system of ITER for the initial configuration is designed to provide 20MW of RF power into the plasma during 3600s and a duty cycle of up to 25% for heating and (co and counter) non-inductive current drive, also used to control the MHD plasma instabilities. The EC system is being procured by 5 domestic agencies plus the ITER Organization (IO). F4E has the largest fraction of the EC procurements, which includes 8 high voltage power supplies (HVPS), 6 gyrotrons, the ex-vessel waveguides (includes isolation valves and diamond windows) for all launchers, 4 upper launchers and the main control system. F4E is working with IO to improve the overall design of the EC system by integrating consolidated technological advances, simplifying the interfaces, and doing global engineering analysis and assessments of EC heating and current drive physics and technology capabilities. Examples are the optimization of the HVPS and gyrotron requirements and performance relative to power modulation for MHD control, common qualification programs for diamond window procurements, assessment of the EC grounding system, and the optimization of the launcher steering angles for improved EC access. Here we provide an update on the status of Europe's contribution to the ITER EC system, and a summary of the global activities underway by F4E in collaboration with IO for the optimization of the subsystems.

  8. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.

  9. Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks

    PubMed Central

    Chen, Zhi; Li, Shuai; Yue, Wenjing

    2014-01-01

    Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as the multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve the objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as the chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show MOCADMA can have good capabilities in maintaining the sensing coverage, achieve higher network coverage while improving the energy efficiency and effectively prolonging the network lifetime, and have a significant improvement over some existing algorithms. PMID:25360579

  10. Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks.

    PubMed

    Chen, Zhi; Li, Shuai; Yue, Wenjing

    2014-10-30

    Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as the multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve the objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as the chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show MOCADMA can have good capabilities in maintaining the sensing coverage, achieve higher network coverage while improving the energy efficiency and effectively prolonging the network lifetime, and have a significant improvement over some existing algorithms.

  11. Viscous Aerodynamic Shape Optimization with Installed Propulsion Effects

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Seidel, Jonathan A.; Rallabhandi, Sriram K.

    2017-01-01

    Aerodynamic shape optimization is demonstrated to tailor the under-track pressure signature of a conceptual low-boom supersonic aircraft. Primarily, the optimization reduces nearfield pressure waveforms induced by propulsion integration effects. For computational efficiency, gradient-based optimization is used and coupled to the discrete adjoint formulation of the Reynolds-averaged Navier-Stokes equations. The engine outer nacelle, nozzle, and vertical tail fairing are axisymmetrically parameterized, while the horizontal tail is shaped using a wing-based parameterization. Overall, 48 design variables are coupled to the geometry and used to deform the outer mold line. During the design process, an inequality drag constraint is enforced to avoid major compromise in aerodynamic performance. Linear elastic mesh morphing is used to deform volume grids between design iterations. The optimization is performed at Mach 1.6 cruise, assuming standard day altitude conditions at 51,707 ft. To reduce uncertainty, a coupled thermodynamic engine cycle model is employed that captures installed inlet performance effects on engine operation.

  12. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2, … and auxiliary functions H0(x), H1(x), H2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.

  13. Optimization of Residual Stresses in MMC's through Process Parameter Control and the use of Heterogeneous Compensating/Compliant Interfacial Layers. OPTCOMP2 User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. The user specifies the initial architecture of the composite and the load history, with the constituent materials being elastic, plastic, viscoplastic, or as defined by the 'user-defined' constitutive model, in addition to the objective function and constraints, through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.

  14. Learning optimal embedded cascades.

    PubMed

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.

  15. Comparison of Low-Thrust Control Laws for Application in Planetocentric Space

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.

    2014-01-01

    Recent interest at NASA in the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. With such transfers involving transfer times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near-optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.

  16. Unimodular sequence design under frequency hopping communication compatibility requirements

    NASA Astrophysics Data System (ADS)

    Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu

    2016-12-01

    The integrated design of radar and anonymous communication has drawn increasing attention recently, since wireless communication systems call for enhanced security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize such an integrated design is to meet the spectrum compatibility requirement between the two systems. This paper deals with a unimodular sequence design technique that optimizes both the spectrum compatibility and the peak sidelobe levels (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement realizes anonymous communication for the FH system and provides this system with a lower probability of intercept (LPI), since the spectrum of the FH system is hidden in that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence design problem into an iterative fitting process. In this process, the power spectral density (PSD) and PSL behaviors of the generated sequences fit both PSD and PSL templates progressively. The two templates are established based on the spectrum compatibility requirement and the expected PSL. In order to ensure communication security and reliability, the spectrum compatibility requirement is given higher priority in the GFT algorithm, which achieves this by adaptively adjusting the weight between the two terms during the iteration. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-to-interference ratio (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above aspects, which demonstrates its effectiveness.
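
    The GFT technique iteratively fits the sequence's spectrum to a template while keeping unit modulus in time. A bare-bones alternating-projection sketch of that idea is shown below (it enforces a magnitude template with a notch over the protected communication band in frequency and unit modulus in time); the template shape and lengths are illustrative assumptions, and this is not the weighted GFT algorithm itself.

```python
import numpy as np

def design_unimodular_sequence(n, stop_band, n_iter=500, seed=0):
    """Alternate between a spectral magnitude template (notched stopband reserved
    for the FH communication band) and the unit-modulus constraint in time."""
    rng = np.random.default_rng(seed)
    x = np.exp(1j * 2 * np.pi * rng.random(n))        # random unimodular start
    # Magnitude template: flat passband, deep notch over the protected band.
    template = np.ones(n)
    template[stop_band] = 1e-3
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = template * np.exp(1j * np.angle(X))       # fit the spectral template
        x = np.fft.ifft(X)
        x = np.exp(1j * np.angle(x))                  # restore unit modulus
    return x

seq = design_unimodular_sequence(n=256, stop_band=slice(60, 90))
spectrum = np.abs(np.fft.fft(seq))
print(f"stopband peak: {spectrum[60:90].max():.2f}, overall mean: {spectrum.mean():.1f}")
```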

  17. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    PubMed

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
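
    The dynamic programming behind optimal 1D clustering fills a table D[m][i] = minimal cost of grouping the first i sorted points into m clusters, with prefix sums giving each within-cluster cost in O(1). A compact O(k·n^2) sketch follows; the released R package uses a faster recurrence, so this only illustrates the basic idea.

```python
import numpy as np

def ckmeans_1d(x, k):
    """Optimal 1D k-means by dynamic programming (textbook O(k*n^2) version)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums of x
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])    # prefix sums of x^2

    def cost(j, i):  # within-cluster sum of squares for x[j..i] (inclusive, 0-based)
        m = i - j + 1
        seg_sum = s1[i + 1] - s1[j]
        return (s2[i + 1] - s2[j]) - seg_sum * seg_sum / m

    D = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for m in range(1, k + 1):
        for i in range(m, n + 1):
            for j in range(m - 1, i):                 # last cluster is x[j..i-1]
                c = D[m - 1, j] + cost(j, i - 1)
                if c < D[m, i]:
                    D[m, i], back[m, i] = c, j
    # Recover cluster boundaries by backtracking
    bounds, i = [], n
    for m in range(k, 0, -1):
        j = back[m, i]
        bounds.append((j, i))
        i = j
    return D[k, n], list(reversed(bounds)), x

wcss, bounds, xs = ckmeans_1d([1.0, 1.1, 0.9, 5.0, 5.2, 9.8, 10.1], 3)
print(wcss, bounds)  # three tight clusters around 1, 5, and 10
```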

  18. Analysis and optimization of the active rigidity joint

    NASA Astrophysics Data System (ADS)

    Manzo, Justin; Garcia, Ephrahim

    2009-12-01

    The active rigidity joint is a composite mechanism using shape memory alloy and shape memory polymer to create a passively rigid joint with thermally activated deflection. A new model for the active rigidity joint relaxes constraints of earlier methods and allows for more accurate deflection predictions compared to finite element results. Using an iterative process to determine the strain distribution and deflection, the method demonstrates accurate results for both surface bonded and embedded actuators with and without external loading. Deflection capabilities are explored through simulated annealing heuristic optimization using a variety of cost functions to explore actuator performance. A family of responses presents actuator characteristics in terms of load bearing and deflection capabilities given material and thermal constraints. Optimization greatly expands the available workspace of the active rigidity joint from the initial configuration, demonstrating specific work capabilities comparable to those of muscle tissue.
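
    The deflection study above searches actuator configurations with simulated annealing against a variety of cost functions. A minimal, generic simulated-annealing loop is sketched below; the cooling schedule, step size, and toy cost function are assumptions for illustration and are unrelated to the joint model of the paper.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Generic simulated annealing: accept worse moves with probability exp(-dE/T)."""
    rng = np.random.default_rng(seed)
    x, e = np.asarray(x0, dtype=float), cost(x0)
    x_best, e_best = x.copy(), e
    t = t0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.shape)   # random neighbour
        e_cand = cost(cand)
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < e_best:
                x_best, e_best = x.copy(), e
        t *= cooling                                      # geometric cooling schedule
    return x_best, e_best

# Toy cost with several local minima
cost = lambda v: float(np.sum(v**2) + 2.0 * np.sum(1 - np.cos(3 * v)))
x_best, e_best = simulated_annealing(cost, x0=[2.5, -2.0])
print(x_best, e_best)
```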

  19. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.

  20. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    NASA Astrophysics Data System (ADS)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
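
    The NRIA builds on the iterative water-filling algorithm for power allocation. For reference, the sketch below implements single-user water-filling across subchannels by bisection on the water level; the gains and power budget are illustrative, and the multi-user loop of the IWFA/NRIA (each user water-filling against the others' crosstalk) is only indicated in the closing comment.

```python
import numpy as np

def water_filling(inv_gains, p_total, tol=1e-10):
    """Single-user water-filling: p_k = max(mu - inv_gains[k], 0) with sum p_k = p_total.
    inv_gains[k] is the noise-to-gain ratio on subchannel k; mu is found by bisection."""
    lo, hi = 0.0, np.max(inv_gains) + p_total
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.sum(np.maximum(mu - inv_gains, 0.0)) > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv_gains, 0.0)

# Illustrative 6-subchannel example
inv_gains = np.array([0.2, 0.5, 1.0, 0.3, 2.0, 0.8])
p = water_filling(inv_gains, p_total=2.0)
rates = np.log2(1.0 + p / inv_gains)
print(np.round(p, 3), round(float(rates.sum()), 3))
# In the IWFA each user repeats this step treating the other users' crosstalk as noise,
# iterating until the power allocations stop changing.
```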

  1. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and nonlinear optimization software modules, GLOBAL & LOCAL. GLO is designed for controlling and easy coupling to any scientific software application. GLO runs the optimization module and scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).

  2. Topology-optimization-based design method of flexures for mounting the primary mirror of a large-aperture space telescope.

    PubMed

    Hu, Rui; Liu, Shutian; Li, Quhao

    2017-05-20

    For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.

  3. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints consists of equalities, and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.

  4. Preliminary Process Design of ITER ELM Coil Bracket Brazing

    NASA Astrophysics Data System (ADS)

    LI, Xiangbin; SHI, Yi

    2015-03-01

    In line with the technical requirements of the International Thermonuclear Experimental Reactor (ITER) project, the manufacture and assembly technology of the mid Edge Localized Modes (ELM) coil was developed by the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). As the gap between the bracket and the Stainless Steel jacketed and Mineral Insulated Conductor (SSMIC) can be larger than 0.5 mm, instead of the 0.01 mm to 0.1 mm typical of normal industrial cases, the process of brazing the mid ELM coil bracket to the SSMICT becomes quite challenging from a technical viewpoint. This paper describes the preliminary design of the ELM coil bracket brazing process, the optimal bracket brazing curve, and the thermal simulation of the bracket furnace brazing method developed in ANSYS. BAg-6 foil (Bag50Cu34Zn16) plus BAg-1a paste (Bag45CuZnCd) solders were chosen as the brazing filler. By testing an SSMICT prototype, it is shown that the average gap between the bracket and the SSMIC could be controlled to 0.2-0.3 mm, and that there were few voids in the brazing surface. The results also verified that the preliminary design had favorable heat-conduction performance in the bracket.

  5. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to its potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the intermediate reconstructed image to account for the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates across iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
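
    Algebraic iterative reconstruction updates the image ray by ray. The sketch below shows the underlying Kaczmarz/ART sweep for a plain linear system; the derivative handling of the DPC projections and the cone-beam geometry of the paper are not modeled here, and the toy projection matrix is an assumption for demonstration.

```python
import numpy as np

def art_reconstruction(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz/ART: for each ray (row a_i), project the current image onto
    the hyperplane a_i . x = b_i, with relaxation factor `relax`."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                residual = b[i] - A[i] @ x
                x += relax * (residual / row_norms[i]) * A[i]
    return x

# Tiny consistent toy system standing in for a projection matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 40))
x_true = rng.random(40)
b = A @ x_true
x_rec = art_reconstruction(A, b, n_sweeps=200)
print(f"relative error: {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3e}")
```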

  6. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  7. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2017-08-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2 )-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.
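
    A minimal sketch of the outer node-relocation loop, assuming a black-box FE forming simulation fe_forming_sim that maps material nodal coordinates to computed spatial coordinates; the simple fixed-point update shown here stands in for the quasi-Newton update with recovered sensitivities used by the authors.

      import numpy as np

      def inverse_form_finding(x_target, X0, fe_forming_sim, n_iter=20, tol=1e-8):
          """Relocate material (undeformed) nodal positions X so that the computed
          spatial (deformed) positions match the prescribed ones x_target."""
          X = X0.copy()
          for _ in range(n_iter):
              x = fe_forming_sim(X)                     # non-invasive call to the forming simulation
              residual = x - x_target                   # per-node mismatch, shape (n_nodes, dim)
              objective = 0.5 * np.sum(residual ** 2)   # least-squares objective
              if objective < tol:
                  break
              X -= residual                             # fixed-point relocation of the design nodes
          return X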

  8. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2018-04-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  9. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
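
    A minimal sketch of the selection loop described above. The callables predict, novelty, and refit are placeholders standing in for the binding-pocket model's activity prediction, the three-dimensional structural-novelty measure, and model refinement; they are not the authors' implementation.

      def iterative_selection(pool, predict, novelty, refit, n_rounds=10, k_act=5, k_nov=5):
          """Active-learning loop: each round picks k_act molecules by confident
          predicted activity and k_nov by structural novelty, then refits the
          model with the newly 'synthesized' compounds."""
          selected = []
          for _ in range(n_rounds):
              remaining = [m for m in pool if m not in selected]
              by_activity = sorted(remaining, key=predict, reverse=True)[:k_act]
              rest = [m for m in remaining if m not in by_activity]
              by_novelty = sorted(rest, key=lambda m: novelty(m, selected), reverse=True)[:k_nov]
              selected.extend(by_activity + by_novelty)
              refit(selected)                 # model refinement with the new data
          return selected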

  10. Optimization studies of the ITER low field side reflectometer.

    PubMed

    Diem, S J; Wilgen, J B; Bigelow, T S; Hanson, G R; Harvey, R W; Smirnov, A P

    2010-10-01

    Microwave reflectometry will be used on ITER to measure the electron density profile, density fluctuations due to MHD/turbulence, edge localized mode (ELM) density transients, and as an L-H transition monitor. The ITER low field side reflectometer system will measure both core and edge quantities using multiple antenna arrays spanning frequency ranges of 15-155 GHz for the O-mode system and 55-220 GHz for the X-mode system. Optimization studies using the GENRAY ray-tracing code have been done for edge and core measurements. The reflectometer launchers will utilize the HE11 mode launched from circular corrugated waveguide. The launched beams are assumed to be Gaussian with a beam waist diameter of 0.643 times the waveguide diameter. Optimum launcher size and placement are investigated by computing the antenna coupling between launchers, assuming the launched and received beams have a Gaussian beam pattern.

  11. DIII-D accomplishments and plans in support of fusion next steps

    DOE PAGES

    Buttery, R. J; Eidietis, N.; Holcomb, C.; ...

    2013-06-01

    DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. Operating scenarios for ITER have been adapted to low torque and are now being optimized for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high βN capable steady state regimes. Divertor research is identifying the challenge, physics and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping optimize design choices and prepare the scientific tools for operation in ITER, and resolve key elements of the plasma configuration and divertor solution for an FNSF.

  12. Improved spatial resolution and lower-dose pediatric CT imaging: a feasibility study to evaluate narrowing the X-ray photon energy spectrum.

    PubMed

    Benz, Mark G; Benz, Matthew W; Birnbaum, Steven B; Chason, Eric; Sheldon, Brian W; McGuire, Dale

    2014-08-01

    This feasibility study has shown that improved spatial resolution and reduced radiation dose can be achieved in pediatric CT by narrowing the X-ray photon energy spectrum. This is done by placing a hafnium filter between the X-ray generator and a pediatric abdominal phantom. A CT system manufactured in 1999 that was in the process of being remanufactured was used as the platform for this study. This system had the advantage of easy access to the X-ray generator for modifications to change the X-ray photon energy spectrum; it also had the disadvantage of not employing the latest post-imaging noise reduction iterative reconstruction technology. Because we observed improvements after changing the X-ray photon energy spectrum, we recommend a future study combining this change with an optimized iterative reconstruction noise reduction technique.

  13. Lunar lander conceptual design

    NASA Technical Reports Server (NTRS)

    Stecklein, J. M.; Petro, A. J.; Stump, W. R.; Adorjan, A. S.; Chambers, T. V.; Donofrio, M.; Hirasaki, J. K.; Morris, O. G.; Nudd, G.; Rawlings, R. P.

    1992-01-01

    This paper is a first look at the problems of building a lunar lander to support a small lunar surface base. A series of trade studies was performed to define the lander. The initial trades concerned choosing number of stages, payload mass, parking orbit altitude, and propellant type. Other important trades and issues included plane change capability, propellant loading and maintenance location, and reusability considerations. Given a rough baseline, the systems were then reviewed. A conceptual design was then produced. The process was carried through only one iteration. Many more iterations are needed. A transportation system using reusable, aerobraked orbital transfer vehicles (OTV's) is assumed. These OTV's are assumed to be based and maintained at a low Earth orbit (LEO) space station, optimized for transportation functions. Single- and two-stage OTV stacks are considered. The OTV's make the translunar injection (TLI), lunar orbit insertion (LOI), and trans-Earth injection (TEI) burns, as well as midcourse and perigee raise maneuvers.

  14. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
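
    For illustration, a reflected mixed-radix Gray ordering of the grid points is sketched below: consecutive points differ by a single step in one input, so each model solve can be warm-started from its neighbour. This is the standard reflected construction, not necessarily the exact code or the quasi-spiral heuristic proposed in the paper.

      def gray_path(radices):
          """Reflected mixed-radix Gray enumeration of a grid: consecutive points
          differ by one step in exactly one coordinate."""
          if not radices:
              yield ()
              return
          first, rest = radices[0], radices[1:]
          forward = True
          for tail in gray_path(rest):
              order = range(first) if forward else range(first - 1, -1, -1)
              for i in order:
                  yield (i,) + tail
              forward = not forward

      # Example: scan a 3 x 2 grid of input values
      for point in gray_path((3, 2)):
          print(point)   # (0,0) (1,0) (2,0) (2,1) (1,1) (0,1)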

  15. A unified, multifidelity quasi-Newton optimization method with application to aero-structural design

    NASA Astrophysics Data System (ADS)

    Bryson, Dean Edward

    A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. 
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
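
    The inverse-Hessian bookkeeping referred to above is standard quasi-Newton practice; the snippet below shows the usual BFGS update of an inverse-Hessian approximation from a design step and gradient change. It is generic, not the dissertation's multifidelity logic.

      import numpy as np

      def bfgs_inverse_update(H, s, y):
          """BFGS update of an inverse-Hessian approximation H.
          s = x_new - x_old (design step), y = g_new - g_old (gradient change)."""
          rho = 1.0 / float(y @ s)            # assumes the curvature condition y.s > 0 holds
          I = np.eye(len(s))
          V = I - rho * np.outer(s, y)
          return V @ H @ V.T + rho * np.outer(s, s)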

  16. Optimized up-down asymmetry to drive fast intrinsic rotation in tokamaks

    NASA Astrophysics Data System (ADS)

    Ball, Justin; Parra, Felix I.; Landreman, Matt; Barnes, Michael

    2018-02-01

    Breaking the up-down symmetry of the tokamak poloidal cross-section can significantly increase the spontaneous rotation due to turbulent momentum transport. In this work, we optimize the shape of flux surfaces with both tilted elongation and tilted triangularity in order to maximize this drive of intrinsic rotation. Nonlinear gyrokinetic simulations demonstrate that adding optimally-tilted triangularity can double the momentum transport of a tilted elliptical shape. This work indicates that tilting the elongation and triangularity in an ITER-like device can reduce the energy transport and drive intrinsic rotation with an Alfvén Mach number of roughly 1%. This rotation is four times larger than the rotation expected in ITER and is approximately what is needed to stabilize MHD instabilities. It is shown that this optimal shape can be created using the shaping coils of several present-day experiments.

  17. Real-Time Optimization for use in a Control Allocation System to Recover from Pilot Induced Oscillations

    NASA Technical Reports Server (NTRS)

    Leonard, Michael W.

    2013-01-01

    Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.

  18. Discrete Logic Modelling Optimization to Contextualize Prior Knowledge Networks Using PRUNET

    PubMed Central

    Androsova, Ganna; del Sol, Antonio

    2015-01-01

    High-throughput technologies have led to the generation of an increasing amount of data in different areas of biology. Datasets capturing the cell’s response to its intra- and extra-cellular microenvironment allow such data to be incorporated as signed and directed graphs or influence networks. These prior knowledge networks (PKNs) represent our current knowledge of the causality of cellular signal transduction. New signalling data is often examined and interpreted in conjunction with PKNs. However, different biological contexts, such as cell type or disease states, may have distinct variants of signalling pathways, resulting in the misinterpretation of new data. The identification of inconsistencies between measured data and signalling topologies, as well as the training of PKNs using context specific datasets (PKN contextualization), are necessary conditions to construct reliable, predictive models, which are current challenges in the systems biology of cell signalling. Here we present PRUNET, a user-friendly software tool designed to address the contextualization of PKNs to specific experimental conditions. As input, the algorithm takes a PKN and the expression profile of two given stable steady states or cellular phenotypes. The PKN is iteratively pruned using an evolutionary algorithm to perform an optimization process. This optimization rests on a match between predicted attractors in a discrete logic model (Boolean) and a Booleanized representation of the phenotypes, within a population of alternative subnetworks that evolves iteratively. We validated the algorithm by applying PRUNET to four biological examples and using the resulting contextualized networks to predict missing expression values and to simulate well-characterized perturbations. PRUNET constitutes a tool for the automatic curation of a PKN to make it suitable for describing biological processes under particular experimental conditions. The general applicability of the implemented algorithm makes PRUNET suitable for a variety of biological processes, for instance cellular reprogramming or transitions between healthy and disease states. PMID:26058016
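
    A bare-bones sketch of the evolutionary pruning idea: each individual is a keep/drop mask over the PKN edges, and a placeholder fitness function scores how well the attractors of the pruned Boolean model match the Booleanized phenotypes. This illustrates the optimization scheme only, not PRUNET's actual implementation.

      import random

      def prune_network(edges, fitness, n_pop=50, n_gen=100, p_mut=0.02):
          """Evolve a population of edge keep/drop masks; `fitness` is a
          user-supplied callable scoring a mask (higher is better)."""
          pop = [[random.random() < 0.5 for _ in edges] for _ in range(n_pop)]
          for _ in range(n_gen):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: n_pop // 2]                  # truncation selection
              children = []
              for mask in parents:                         # bit-flip mutation of each parent
                  children.append([(not bit) if random.random() < p_mut else bit for bit in mask])
              pop = parents + children
          return max(pop, key=fitness)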

  19. Description of the prototype diagnostic residual gas analyzer for ITER.

    PubMed

    Younkin, T R; Biewer, T M; Klepper, C C; Marcus, C

    2014-11-01

    The diagnostic residual gas analyzer (DRGA) system to be used during ITER tokamak operation is being designed at Oak Ridge National Laboratory to measure fuel ratios (deuterium and tritium), fusion ash (helium), and impurities in the plasma. The eventual purpose of this instrument is for machine protection, basic control, and physics on ITER. Prototyping is ongoing to optimize the hardware setup and measurement capabilities. The DRGA prototype comprises a vacuum system and measurement technologies that will overlap to meet ITER measurement requirements. Three technologies included in this diagnostic are a quadrupole mass spectrometer, an ion trap mass spectrometer, and an optical Penning gauge that are designed to document relative and absolute gas concentrations.

  20. Diffractive elements for generating microscale laser beam patterns: a Y2K problem

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Krueger, Sven; Wernicke, Guenther K.; Ferstl, Margit

    2000-03-01

    Lasers are widely used in industrial fabrication for engraving, cutting and many other purposes. However, material processing at very small scales is still a matter of concern. Advances in diffractive optics could provide for laser systems that could be used for engraving or cutting of micro-scale patterns at high speeds. In our paper we focus on the design of diffractive elements which can be used for this special application. It is a common desire in material processing to apply 'discrete' as well as 'continuous' beam patterns. Especially, the latter case is difficult to handle as typical micro-scale patterns are characterized by bad band-limitation properties, and as speckles can easily occur in beam patterns. It is shown in this paper that a standard iterative design method usually fails to obtain diffractive elements that generate diffraction patterns with acceptable quality. Insights gained from an analysis of the design problems are used to optimize the iterative design method. We demonstrate applicability and success of our approach by the design of diffractive phase elements that generate a discrete and a continuous 'Y2K' pattern.
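
    The "standard iterative design method" referred to above is typically a Gerchberg-Saxton-style iterative Fourier-transform algorithm; a minimal, generic version is sketched below. It is the baseline such designs start from, not the optimized method developed in the paper.

      import numpy as np

      def ifta(target_intensity, n_iter=100):
          """Iterative Fourier-transform design of a phase-only diffractive element
          whose far-field intensity approximates target_intensity (2D array)."""
          target_amp = np.sqrt(target_intensity)
          phase = 2 * np.pi * np.random.rand(*target_amp.shape)          # random initial phase
          for _ in range(n_iter):
              far_field = np.fft.fft2(np.exp(1j * phase))                # element plane -> pattern plane
              far_field = target_amp * np.exp(1j * np.angle(far_field))  # impose target amplitude
              element = np.fft.ifft2(far_field)                          # back-propagate
              phase = np.angle(element)                                  # keep phase only (phase element)
          return phase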

  1. Inter-hospital communications and transport: turning one-way funnels into two-way networks.

    PubMed

    Rokos, Ivan C; Sanddal, Nels D; Pancioli, Arthur M; Wolff, Catherine; Gaieski, David F

    2010-12-01

    The Inter-hospital Communications and Transport workgroup was charged with exploring the current status, barriers, and data necessary to optimize the initial destination and subsequent transfer of patients between and among acute care settings. The subtitle, "Turning Funnels Into Two-way Networks," is descriptive of the approach that the workgroup took by exploring how and when smaller facilities in suburban, rural, and frontier areas can contribute to the daily business of caring for emergency patients across the lower-acuity spectrum, in some instances with consultant support from academic medical centers. It also focused on the need to identify high-acuity patients and expedite triage and transfer of those patients to facilities with specialty resources. Draft research recommendations were developed through an iterative writing process and presented to a breakout session of Academic Emergency Medicine's 2010 consensus conference, "Beyond Regionalization: Integrated Networks of Emergency Care." Priority research areas were determined by informal consensus of the breakout group. A subsequent iterative writing process was undertaken to complete this article. A number of broad research questions are presented. © 2010 by the Society for Academic Emergency Medicine.

  2. Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.

  3. Concurrent design of quasi-random photonic nanostructures

    PubMed Central

    Lee, Won-Kyu; Yu, Shuangcheng; Engel, Clifford J.; Reese, Thaddeus; Rhee, Dongjoon; Chen, Wei

    2017-01-01

    Nanostructured surfaces with quasi-random geometries can manipulate light over broadband wavelengths and wide ranges of angles. Optimization and realization of stochastic patterns have typically relied on serial, direct-write fabrication methods combined with real-space design. However, this approach is not suitable for customizable features or scalable nanomanufacturing. Moreover, trial-and-error processing cannot guarantee fabrication feasibility because processing–structure relations are not included in conventional designs. Here, we report wrinkle lithography integrated with concurrent design to produce quasi-random nanostructures in amorphous silicon at wafer scales that achieved over 160% light absorption enhancement from 800 to 1,200 nm. The quasi-periodicity of patterns, materials filling ratio, and feature depths could be independently controlled. We statistically represented the quasi-random patterns by Fourier spectral density functions (SDFs) that could bridge the processing–structure and structure–performance relations. Iterative search of the optimal structure via the SDF representation enabled concurrent design of nanostructures and processing. PMID:28760975

  4. Nozzle Numerical Analysis Of The Scimitar Engine

    NASA Astrophysics Data System (ADS)

    Battista, F.; Marini, M.; Cutrone, L.

    2011-05-01

    This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the assumptions made for performance figures of different components during the iteration process within LAPCAT-I will be assessed in more detail. This paper presents a numerical analysis aimed at optimizing the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All the computations have been carried out by using the CIRA C3NS code in non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.

  5. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  7. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.

  8. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.

  9. Topology optimization analysis based on the direct coupling of the boundary element method and the level set method

    NASA Astrophysics Data System (ADS)

    Vitório, Paulo Cezar; Leonel, Edson Denner

    2017-12-01

    The structural design must ensure suitable working conditions by attending to safety and economy criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, materials strength and structural system configuration. In this regard, topology optimization aims at achieving the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material, respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing. Because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of such approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the von Mises stress. The developed optimization model was applied in two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.

  10. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  11. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract an aircraft’s modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of accuracy in the reconstruction. There are several methods to solve sensor placement optimization (SPO) problems, but for this case an original approach has been established, based on an iterative process for mode-shape reconstruction through an adaptive Kriging metamodeling approach, so-called efficient global optimization (EGO)-SPO. The main idea in this publication is to solve an optimization problem where the sensor locations are the variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a reduction of sensors by 30% using our EGO-SPO strategy.
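
    One reading of the objective above is the trace of the MAC matrix between the reference mode shapes and the mode shapes reconstructed from a candidate sensor layout; a minimal sketch of that evaluation is shown below (real-valued mode shapes assumed, and the Kriging reconstruction step itself is not shown).

      import numpy as np

      def mac_trace(phi_ref, phi_rec):
          """Trace of the MAC matrix between reference and reconstructed mode
          shapes (columns = modes); higher values indicate better reconstruction."""
          n_modes = phi_ref.shape[1]
          mac = np.zeros((n_modes, n_modes))
          for i in range(n_modes):
              for j in range(n_modes):
                  num = abs(phi_ref[:, i] @ phi_rec[:, j]) ** 2
                  den = (phi_ref[:, i] @ phi_ref[:, i]) * (phi_rec[:, j] @ phi_rec[:, j])
                  mac[i, j] = num / den
          return np.trace(mac)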

  12. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  13. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  14. Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies

    NASA Astrophysics Data System (ADS)

    Vishnoi, U.; Noll, T. G.

    2012-09-01

    The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially but not exclusively in digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce the hardware complexity is an important research topic at present. In such receivers, CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro-architecture well suited to such applications is presented. This architecture is parameterized according to the wordlengths as well as the number of iterations and can be easily extended to a floating-point data format. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, whereby the degree of pipelining is organized with one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology. For this purpose, carry-select adders and logarithmic shifters have been chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology, and occupies a silicon area of only 1560 μm². Maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst-case technology and application corner conditions, respectively. Simulated dynamic power dissipation is 0.24 µW/MHz at 0.9 V; static power is 38 µW in the slow corner, 65 µW in the typical corner, and 518 µW in the fast corner. The latter can be reduced by 43% in a 40-nm CMOS technology using a 0.5 V reverse back-bias. These features are compared with the results from different design styles as well as with an implementation in 28-nm CMOS technology. It is interesting that in the latter case area scales as expected, but worst-case performance and energy do not scale well anymore.
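
    For reference, the textbook CORDIC rotation iteration is sketched below in floating point; a hardware implementation such as the one described above would instead use fixed-point shift-and-add operations and a precomputed arctangent lookup table.

      import math

      def cordic_rotate(angle, n_iter=16):
          """Textbook CORDIC rotation (circular mode): returns (cos(angle), sin(angle)).
          Converges for |angle| up to about 1.74 rad without argument reduction."""
          K = 1.0
          for i in range(n_iter):
              K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # accumulated scale-factor correction
          x, y, z = K, 0.0, float(angle)
          for i in range(n_iter):
              d = 1.0 if z >= 0.0 else -1.0            # micro-rotation direction
              x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
              z -= d * math.atan(2.0 ** (-i))          # would be a lookup table in hardware
          return x, y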

  15. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution calculated. Plots are produced of error versus filter length; and from these plots the most accurate length filters determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
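
    A minimal sketch of the inverse-filter construction described above: the filter is the inverse DFT of the reciprocal of the response's DFT, optionally truncated to a chosen length before convolution with the (smoothed) data. Noise handling and the Morrison smoothing step itself are not shown.

      import numpy as np

      def inverse_filter_deconvolve(data, response, filter_length=None):
          """Deconvolve `data` by convolving it with an inverse filter derived
          from the system response (both 1D arrays)."""
          n = len(data) + len(response) - 1
          R = np.fft.fft(response, n)                  # transform of the response
          inv_filter = np.real(np.fft.ifft(1.0 / R))   # near-zero spectral values may need a floor in practice
          if filter_length is not None:
              inv_filter = inv_filter[:filter_length]  # various filter lengths can be tested, as in the study
          return np.convolve(data, inv_filter, mode='same')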

  16. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  17. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Efficient global fiber tracking on multi-dimensional diffusion direction maps

    NASA Astrophysics Data System (ADS)

    Klein, Jan; Köhler, Benjamin; Hahn, Horst K.

    2012-02-01

    Global fiber tracking algorithms have recently been proposed which were able to compute results of unprecedented quality. They account for avoiding accumulation errors by a global optimization process at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can drastically be reduced by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method for diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions can be determined and separated in an excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result of a whole brain can be determined in less than half an hour which is a strong improvement compared to previous work.

  19. The impact of short-term stochastic variability in solar irradiance on optimal microgrid design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schittekatte, Tim; Stadler, Michael; Cardoso, Gonçalo

    2016-07-01

    This paper proposes a new methodology to capture the impact of fast moving clouds on utility power demand charges observed in microgrids with photovoltaic (PV) arrays, generators, and electrochemical energy storage. It consists of a statistical approach to introduce sub-hourly events in the hourly economic accounting process. The methodology is implemented in the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state of the art mixed integer linear model used to optimally size DER in decentralized energy systems. Results suggest that previous iterations of DER-CAM could undersize battery capacities. The improved model depicts more accurately the economic value of PV as well as the synergistic benefits of pairing PV with storage.

  20. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high-current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  1. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  2. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  3. Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems

    NASA Astrophysics Data System (ADS)

    Kreuder, John J.

    Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design and not an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB. A modular, shell-based architecture is developed to carry out concept generation, system simulation and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections is verified and provided as well.
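
    As a rough illustration of the kind of Newton-based steady-state solve described above, the sketch below applies a damped Newton iteration with a finite-difference Jacobian to a small square residual system. This is not TEPSS code; the residual function, damping factor, and variable names are illustrative assumptions.

```python
import numpy as np

def solve_steady_state(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
    """Damped Newton iteration on a square system residual(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        # Forward-difference Jacobian, column by column (black-box component models)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        dx = np.linalg.solve(J, -r)
        x = x + 0.8 * dx          # mild damping to limit overshoot of physical variables
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical two-equation energy balance with unknowns (T, mdot)
f = lambda v: np.array([v[0] ** 2 + v[1] - 37.0, v[0] - v[1] + 1.0])
print(solve_steady_state(f, [1.0, 1.0]))
```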

  4. SRS modeling in high power CW fiber lasers for component optimization

    NASA Astrophysics Data System (ADS)

    Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.

    2017-02-01

    A CW kilowatt fiber laser numerical model has been developed taking into account intracavity stimulated Raman scattering (SRS). It uses the split-step Fourier method, which is applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool to design optimized and innovative fiber components to push back the onset of SRS for a given fiber core diameter.
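
    For reference, the sketch below shows a single split-step Fourier propagation step of the general kind such models iterate over cavity round trips: linear dispersion applied in the frequency domain and a nonlinear phase applied in the time domain. The fiber parameters and the simple Kerr-type nonlinearity are illustrative stand-ins, not the paper's intracavity Raman-gain model.

```python
import numpy as np

def split_step(field, dz, beta2, gamma, dt):
    """One split-step Fourier step for a complex field envelope."""
    omega = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)
    # Linear part: quadratic dispersion phase applied in the frequency domain
    field = np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * beta2 * omega ** 2 * dz))
    # Nonlinear part: intensity-dependent phase in the time domain (Kerr-type stand-in)
    return field * np.exp(1j * gamma * np.abs(field) ** 2 * dz)

# Illustrative use: a Gaussian pulse propagated through 100 short fiber segments
t = np.linspace(-10e-12, 10e-12, 2048)
u = np.exp(-(t / 1e-12) ** 2).astype(complex)
for _ in range(100):
    u = split_step(u, dz=0.1, beta2=-2e-26, gamma=1e-3, dt=t[1] - t[0])
```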

  5. Simple iterative construction of the optimized effective potential for orbital functionals, including exact exchange.

    PubMed

    Kümmel, Stephan; Perdew, John P

    2003-01-31

    For exchange-correlation functionals that depend explicitly on the Kohn-Sham orbitals, the potential V(xcsigma)(r) must be obtained as the solution of the optimized effective potential (OEP) integral equation. This is very demanding and has limited the use of orbital functionals. We demonstrate that instead the OEP can be obtained iteratively by solving the partial differential equations for the orbital shifts that exactify the Krieger-Li-Iafrate approximation. Unoccupied orbitals do not need to be calculated. Accuracy and efficiency of the method are shown for atoms and clusters using the exact-exchange energy. Counterintuitive asymptotic limits of the exact OEP are presented.

  6. Optimization of the spatial resolution for the GE discovery PET/CT 710 by using NEMA NU 2-2007 standards

    NASA Astrophysics Data System (ADS)

    Yoon, Hyun Jin; Jeong, Young Jin; Son, Hye Joo; Kang, Do-Young; Hyun, Kyung-Yae; Lee, Min-Kyung

    2015-01-01

    The spatial resolution in positron emission tomography (PET) is fundamentally limited by the geometry of the detector element, the positron's recombination range with electrons, the acollinearity of the positron, the crystal decoding error, the penetration into the detector ring, and the reconstruction algorithms. In this paper, optimized parameters are suggested to produce high-resolution PET images by using an iterative reconstruction algorithm. A phantom with three point sources structured with three capillary tubes was prepared with an axial extension of less than 1 mm and was filled with 18F-fluorodeoxyglucose (18F-FDG) with concentrations above 200 MBq/cc. The performance measures of all the PET images were acquired according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standards procedures. The parameters for the iterative reconstruction were adjusted around the values recommended by General Electric (GE), and the optimized values of the spatial resolution, expressed as the full width at half maximum (FWHM) or the full width at tenth of maximum (FWTM), were found for the best PET resolution. The axial and the transverse spatial resolutions, according to the filtered back-projection (FBP) at 1 cm off-axis, were 4.81 and 4.48 mm, respectively. The axial and the transaxial spatial resolutions at 10 cm off-axis were 5.63 mm and 5.08 mm, respectively, and the transaxial resolution at 10 cm was evaluated as the average of the radial and the tangential measurements. The recommended optimized parameters of the spatial resolution according to the NEMA phantom for the number of subsets, the number of iterations, and the Gaussian post-filter are 12, 3, and 3 mm for the iterative reconstruction VUE Point HD without the SharpIR algorithm (HD), and 12, 12, and 5.2 mm with SharpIR (HD.S), respectively, according to the Advantage Workstation Volume Share 5 (AW4.6). The performance measurements for the GE Discovery PET/CT 710 using the NEMA NU 2-2007 standards from our results will be helpful in the quantitative analysis of PET scanner images. The spatial resolution was modified more by using an improved algorithm, such as HD.S, than by using HD or FBP. The use of the optimized parameters for iterative reconstructions is strongly recommended for qualitative images from the GE Discovery PET/CT 710 scanner.
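
    The FWHM and FWTM figures quoted above come down to measuring the width of a point-source profile at a fraction of its peak, with the crossing positions found by linear interpolation between samples. The sketch below is a minimal version of that measurement on an assumed one-dimensional profile; it is not GE or NEMA reference code and presumes the peak is well sampled inside the profile.

```python
import numpy as np

def width_at_fraction(profile, pixel_mm, fraction=0.5):
    """Width of a 1-D point-spread profile at a given fraction of its peak (mm)."""
    y = np.asarray(profile, dtype=float)
    level = fraction * y.max()
    above = np.where(y >= level)[0]
    left, right = above[0], above[-1]
    # Linear interpolation of the left and right crossings around the above-level run
    xl = left - 1 + (level - y[left - 1]) / (y[left] - y[left - 1])
    xr = right + (y[right] - level) / (y[right] - y[right + 1])
    return (xr - xl) * pixel_mm

# Synthetic Gaussian profile, sigma = 2 px, 1 mm pixels: FWHM ~ 2.355 * sigma
x = np.arange(41)
g = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)
print(width_at_fraction(g, pixel_mm=1.0))                 # ~4.7 mm (FWHM)
print(width_at_fraction(g, pixel_mm=1.0, fraction=0.1))   # ~8.6 mm (FWTM)
```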

  7. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization which was based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
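
    A hedged sketch of the processing chain outlined above: high-pass filtering by subtracting a Gaussian-smoothed copy, CLAHE, and an entropy objective used to pick the parameters. A small grid search stands in for the interior-point constrained optimization, and the parameter ranges, Gaussian sigma, and use of scikit-image are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure

def entropy(img, bins=256):
    """Shannon entropy of the image histogram (the optimization objective)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def enhance(img, smooth_weight, clip_limit, block_size):
    """High-pass by subtracting a weighted Gaussian-smoothed copy, then CLAHE."""
    hp = np.clip(img - smooth_weight * gaussian_filter(img, sigma=8), 0.0, 1.0)
    return exposure.equalize_adapthist(hp, kernel_size=block_size, clip_limit=clip_limit)

def best_parameters(img):
    """Grid-search stand-in for the interior-point parameter optimization."""
    candidates = [(w, c, b) for w in (0.3, 0.5, 0.7)
                            for c in (0.01, 0.02, 0.04)
                            for b in (32, 64, 128)]
    return max(candidates, key=lambda p: entropy(enhance(img, *p)))

rng = np.random.default_rng(0)
setup_image = np.clip(rng.normal(0.4, 0.05, (256, 256)), 0.0, 1.0)  # placeholder 2D image
print(best_parameters(setup_image))
```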

  8. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
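
    A schematic of the proximal-linearized update the abstract describes, using a simple quadratic surrogate loss so that each iteration's discrete proximal step reduces to an elementwise sign. The loss, the projection matrix R, and the step parameter mu are illustrative assumptions, not the paper's supervised or unsupervised hashing objectives.

```python
import numpy as np

def dplm_style_codes(X, R, mu=4.0, n_iter=50):
    """Proximal-linearized binary-code update for min ||B - XR||_F^2, B in {-1, +1}.

    mu must exceed the Lipschitz constant of the smooth term's gradient (2 here);
    the proximal map of the binary indicator is simply an elementwise sign.
    """
    B = np.sign(X @ R)
    B[B == 0] = 1
    for _ in range(n_iter):
        grad = 2.0 * (B - X @ R)      # gradient of the smooth surrogate loss
        B = np.sign(B - grad / mu)    # analytical discrete proximal step
        B[B == 0] = 1
    return B

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 32))       # feature matrix
R = rng.normal(size=(32, 64))         # hypothetical projection onto 64 bits
codes = dplm_style_codes(X, R)
print(codes.shape, np.unique(codes))
```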

  9. Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)

    2007-01-01

    A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
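
    A simplified sketch of the iterative threshold search: every candidate gray level is scored with a fuzzy entropy measure and the minimizer is kept. The class-mean-based membership and the linear factor 2*min(m, 1-m) below are stand-ins for the patented triangular membership function and linear fuzzy entropy factor, whose exact forms are not reproduced here; the bounds are taken from the image's own minimum and maximum gray levels, as described above.

```python
import numpy as np

def fuzzy_threshold(image):
    """Return the gray level whose fuzzy entropy is minimal."""
    g = np.asarray(image, dtype=float).ravel()
    span = max(g.max() - g.min(), 1e-12)          # image-specific gray-level bounds
    best_t, best_h = g.min(), np.inf
    for t in np.unique(g)[1:-1]:                  # skip degenerate one-class splits
        back, fore = g[g <= t], g[g > t]
        mu = np.where(g <= t, back.mean(), fore.mean())
        m = 1.0 / (1.0 + np.abs(g - mu) / span)       # membership to the pixel's class
        h = np.mean(2.0 * np.minimum(m, 1.0 - m))     # linear fuzzy entropy factor
        if h < best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(60, 5, 500), rng.normal(180, 5, 500)])
print(fuzzy_threshold(img))   # expected to fall between the two intensity modes
```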

  10. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  11. Image simulation for automatic license plate recognition

    NASA Astrophysics Data System (ADS)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.

  12. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  13. Optimized growth and reorientation of anisotropic material based on evolution equations

    NASA Astrophysics Data System (ADS)

    Jantos, Dustin R.; Junker, Philipp; Hackl, Klaus

    2018-07-01

    Modern high-performance materials have inherent anisotropic elastic properties. The local material orientation can thus be considered to be an additional design variable for the topology optimization of structures containing such materials. In our previous work, we introduced a variational growth approach to topology optimization for isotropic, linear-elastic materials. We solved the optimization problem purely by application of Hamilton's principle. In this way, we were able to determine an evolution equation for the spatial distribution of mass density, which can be evaluated in an iterative process within a solitary finite element environment. We now add the local material orientation, described by a set of three Euler angles, as additional design variables in the three-dimensional model. This leads to three additional evolution equations that can be evaluated separately for each (material) point. Thus, no additional field unknown within the finite element approach is needed, and the evolution of the spatial distribution of mass density and the evolution of the Euler angles can be evaluated simultaneously.

  14. Query optimization for graph analytics on linked data using SPARQL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung -Hwan

    2015-07-01

    Triplestores that support query languages such as SPARQL are emerging as the preferred and scalable solution to represent data and meta-data as massive heterogeneous graphs using Semantic Web standards. With increasing adoption, the desire to conduct graph-theoretic mining and exploratory analysis has also increased. Addressing that desire, this paper presents a solution that is the marriage of Graph Theory and the Semantic Web. We present software that can analyze Linked Data using graph operations such as counting triangles, finding eccentricity, testing connectedness, and computing PageRank directly on triple stores via the SPARQL interface. We describe the process of optimizing performance of the SPARQL-based implementation of such popular graph algorithms by reducing the space-overhead, simplifying iterative complexity and removing redundant computations by understanding query plans. Our optimized approach shows significant performance gains on triplestores hosted on stand-alone workstations as well as hardware-optimized scalable supercomputers such as the Cray XMT.

  15. Kernel approach to molecular similarity based on iterative graph similarity.

    PubMed

    Rupp, Matthias; Proschak, Ewgenij; Schneider, Gisbert

    2007-01-01

    Similarity measures for molecules are of basic importance in chemical, biological, and pharmaceutical applications. We introduce a molecular similarity measure defined directly on the annotated molecular graph, based on iterative graph similarity and optimal assignments. We give an iterative algorithm for the computation of the proposed molecular similarity measure, prove its convergence and the uniqueness of the solution, and provide an upper bound on the required number of iterations necessary to achieve a desired precision. Empirical evidence for the positive semidefiniteness of certain parametrizations of our function is presented. We evaluated our molecular similarity measure by using it as a kernel in support vector machine classification and regression applied to several pharmaceutical and toxicological data sets, with encouraging results.

  16. The modification of hybrid method of ant colony optimization, particle swarm optimization and 3-OPT algorithm in traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Hertono, G. F.; Ubadah; Handari, B. D.

    2018-03-01

    The traveling salesman problem (TSP) is a famous problem of finding the shortest tour that visits every vertex exactly once, except the first vertex, given a set of vertices. This paper discusses three modification methods to solve the TSP by combining Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the 3-Opt algorithm. The ACO is used to find the solution of the TSP, in which the PSO is implemented to find the best values of the parameters α and β that are used in ACO. In order to reduce the total tour length of the feasible solution obtained by ACO, the 3-Opt is then used. In the first modification, the 3-Opt is used to reduce the total tour length of the feasible solutions obtained at each iteration; in the second modification, 3-Opt is used to reduce the total tour length of the entire solution obtained at every iteration. In the third modification, 3-Opt is used to reduce the total tour length of different solutions obtained at each iteration. Results are tested using 6 benchmark problems taken from TSPLIB by calculating the relative error to the best known solution as well as the running time. Among these modifications, only the second and third give satisfactory results, although the second needs more execution time compared to the third.
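
    For orientation, the sketch below shows the simpler two-edge (2-opt) reversal from the same family of local-improvement moves that the 3-Opt step generalizes; the three-edge reconnection itself and the ACO/PSO stages are not reproduced, and the random instance is purely illustrative.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse the segment between two edges while it shortens the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

rng = np.random.default_rng(5)
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
start = list(range(30))
better = two_opt(start, dist)
print(tour_length(start, dist), "->", tour_length(better, dist))
```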

  17. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the restored result can differ greatly from the real image because of noise. In order to solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because the information in the gradient domain is considered better suited to the estimation of the blur kernel, the blur kernel is estimated in the gradient domain. This step can be implemented quickly in the frequency domain by the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, which not only keeps the edges and details of the image, but also ensures the accuracy of the results.
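
    A minimal sketch of the iterative shrinkage-thresholding building block mentioned above, applied here to non-blind deconvolution with a known blur kernel and a plain L1 penalty in place of the paper's L1/L2 ratio prior; the gradient-domain kernel estimation and multi-scale scheme are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm (the shrinkage step)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_deconvolve(blurred, kernel_fft, lam=0.01, step=1.0, n_iter=200):
    """ISTA for 0.5*||k*x - y||^2 + lam*||x||_1 with circular convolution via FFT.

    step must satisfy step <= 1 / max|kernel_fft|^2 for convergence.
    """
    y_fft = np.fft.fft2(blurred)
    x = blurred.copy()
    for _ in range(n_iter):
        residual_fft = kernel_fft * np.fft.fft2(x) - y_fft
        grad = np.real(np.fft.ifft2(np.conj(kernel_fft) * residual_fft))
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative run: Gaussian blur of a random sparse image, then ISTA restoration
n, sigma = 64, 2.0
rng = np.random.default_rng(6)
truth = (rng.random((n, n)) > 0.98).astype(float)
fx = np.fft.fftfreq(n)
kernel_fft = np.exp(-2 * (np.pi * sigma) ** 2 * (fx[:, None] ** 2 + fx[None, :] ** 2))
blurred = np.real(np.fft.ifft2(kernel_fft * np.fft.fft2(truth)))
restored = ista_deconvolve(blurred, kernel_fft)
```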

  18. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
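
    To make concrete the iteration whose stopping point the methodology selects, the sketch below shows a basic SIRT update on a toy linear system. Actual Electron Tomography implementations such as TOMOJ and TOMO3D differ in weighting and constraints, which is precisely why the optimal iteration number is implementation dependent; the toy system here is an assumption for illustration.

```python
import numpy as np

def sirt(A, b, n_iter):
    """Basic SIRT: x <- x + C * A^T * R * (b - A x), where R and C are the inverse
    row and column sums of A (a simultaneous back-projection of normalized residuals)."""
    A = np.asarray(A, dtype=float)
    R = np.where(A.sum(axis=1) > 0, 1.0 / A.sum(axis=1), 0.0)
    C = np.where(A.sum(axis=0) > 0, 1.0 / A.sum(axis=0), 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy overdetermined system standing in for projection data
rng = np.random.default_rng(7)
A = rng.random((40, 10))
x_true = rng.random(10)
x_rec = sirt(A, A @ x_true, 500)
print(np.linalg.norm(x_rec - x_true))   # error shrinks as the iteration count grows
```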

  19. Optimizing density patterns to achieve desired light extraction for displays

    NASA Astrophysics Data System (ADS)

    Davenport, T. L. R.; Cassarly, W. J.

    2007-01-01

    In displays such as backlights and signage, it is often desirable to produce a particular spatial luminance distribution of light. This work demonstrates an iterative optimization technique for determining the density of light extractors required to produce desired luminance distributions.

  20. Cupola Furnace Computer Process Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  1. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects between HSWO region growing iterations merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  2. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  3. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor W. Wong; Tian Tian; Grant Smedley

    2003-08-28

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. A detailed set of piston/ring dynamic and friction models has been developed and applied, illustrating the fundamental relationships between design parameters and friction losses. Various low-friction strategies and concepts have been explored, and engine experiments will validate these concepts. An iterative process of experimentation, simulation, and analysis will be followed with the goal of demonstrating a complete optimized low-friction engine system. As planned, MIT has developed guidelines for an initial set of low-friction piston-ring-pack designs. Current recommendations focus on subtle top-piston-ring and oil-control-ring characteristics. A full-scale Waukesha F18 engine has been installed at Colorado State University and testing of the baseline configuration is in progress. Components for the first design iteration are being procured. Subsequent work includes examining the friction and engine performance data and extending the analyses to other areas to evaluate opportunities for further friction improvement and the impact on oil consumption/emissions and wear, towards demonstrating an optimized reduced-friction engine system.

  4. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  5. Diagnostics of the ITER neutral beam test facility.

    PubMed

    Pasqualotto, R; Serianni, G; Sonato, P; Agostini, M; Brombin, M; Croci, G; Dalla Palma, M; De Muri, M; Gazza, E; Gorini, G; Pomaro, N; Rizzolo, A; Spolaore, M; Zaniol, B

    2012-02-01

    The ITER heating neutral beam (HNB) injector, based on negative ions accelerated at 1 MV, will be tested and optimized in the SPIDER source and MITICA full injector prototypes, using a set of diagnostics not available on the ITER HNB. The RF source, where the H(-)∕D(-) production is enhanced by cesium evaporation, will be monitored with thermocouples, electrostatic probes, optical emission spectroscopy, cavity ring down, and laser absorption spectroscopy. The beam is analyzed by cooling water calorimetry, a short pulse instrumented calorimeter, beam emission spectroscopy, visible tomography, and neutron imaging. Design of the diagnostic systems is presented.

  6. The ITER bolometer diagnostic: Status and plans

    NASA Astrophysics Data System (ADS)

    Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization, performance analysis as well as the design of the diagnostic components and their integration in ITER. This is complemented by the presentation of plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to RH tools for calibration.

  7. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2 without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beamshaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  8. Subwavelength elastic joints connecting torsional waveguides to maximize the power transmission coefficient

    NASA Astrophysics Data System (ADS)

    Lee, Joong Seok; Lee, Il Kyu; Seung, Hong Min; Lee, Jun Kyu; Kim, Yoon Young

    2017-03-01

    Joints with slowly varying tapered shapes, such as linear or exponential profiles, are known to transmit incident wave power efficiently between two waveguides with dissimilar impedances. This statement is valid only when the considered joint length is longer than the wavelengths of the incident waves. When the joint length is shorter than the wavelengths, however, appropriate shapes of such subwavelength joints for efficient power transmission have not been explored much. In this work, considering one-dimensional torsional wave motion in a cylindrical elastic waveguide system, optimal shapes or radial profiles of a subwavelength joint maximizing the power transmission coefficient are designed by a gradient-based optimization formulation. The joint is divided into a number of thin disk elements using the transfer matrix approach and optimal radii of the disks are determined by iterative shape optimization processes for several single or bands of wavenumbers. Due to the subwavelength constraint, the optimized joint profiles were found to be considerably different from the slowly varying tapered shapes. Specifically, for bands of wavenumbers, peculiar gourd-like shapes were obtained as optimal shapes to maximize the power transmission coefficient. Numerical results from the proposed optimization formulation were also experimentally realized to verify the validity of the present designs.

  9. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence... the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem... consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations which are also known as

  10. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  11. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
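
    A simplified sketch of the hit-and-run step described above for an inequality-constrained near-optimal region: pick a random direction, bracket the feasible interval along that line, and jump to a uniformly random point in it. Plain bisection replaces slice sampling, the equality-constraint null-space step is omitted, and the toy near-optimal region at the end is an illustrative assumption.

```python
import numpy as np

def hit_and_run(x0, constraints, n_samples, rng):
    """Sample alternatives from a region defined by constraints c(x) <= 0."""
    def feasible(x):
        return all(c(x) <= 0.0 for c in constraints)

    def max_step(x, d, hi=1e3, tol=1e-9):
        # Largest feasible step along d from x, found by bisection
        lo = 0.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if feasible(x + mid * d) else (lo, mid)
        return lo

    x, samples = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        x = x + rng.uniform(-max_step(x, -d), max_step(x, d)) * d
        samples.append(x.copy())
    return np.array(samples)

# Toy region: min x+y subject to x+y >= 1, x, y >= 0 has optimum 1; alternatives
# are kept within 20% of that optimal objective value.
cons = [lambda v: v[0] + v[1] - 1.2, lambda v: 1.0 - v[0] - v[1],
        lambda v: -v[0], lambda v: -v[1]]
alts = hit_and_run([0.55, 0.55], cons, 500, np.random.default_rng(8))
print(alts.min(axis=0), alts.max(axis=0))
```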

  12. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

  13. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  14. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be really chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.

  15. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and is successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transferred into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on the one hand, exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals and, on the other hand, linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  16. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  17. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  18. 2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.

    2014-03-01

    Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g. blood vessels, a surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.

  19. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
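
    For context, the sketch below is the classical Beck-Teboulle FISTA iteration for an ℓ1-regularized least-squares problem, i.e. the accelerated proximal gradient scheme the paper analyzes; it is not the new step-coefficient optimization proposed there, and the test problem is an illustrative assumption.

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=300):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold (prox) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))            # momentum coefficient
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)               # extrapolated point
        x, t = x_new, t_new
    return x

# Illustrative sparse recovery from noisy random measurements
rng = np.random.default_rng(9)
A = rng.normal(size=(100, 400))
x_true = np.zeros(400)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = fista_lasso(A, b, lam=2.0)
print(np.argsort(-np.abs(x_hat))[:5])   # indices of the largest recovered coefficients
```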

  20. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  1. Adaptive harvest management of North American waterfowl populations: a brief history and future prospects

    USGS Publications Warehouse

    Nichols, J.D.; Runge, M.C.; Johnson, F.A.; Williams, B.K.

    2007-01-01

    Since 1995, the US Fish and Wildlife Service has used an adaptive approach to the management of sport harvest of mid-continent Mallard ducks (Anas platyrhynchos) in North America. This approach differs from many current approaches to conservation and management in requiring close collaboration between managers and scientists. Key elements of this process are objectives, alternative management actions, models permitting prediction of system responses, and a monitoring program. The iterative process produces optimal management decisions and leads to reduction in uncertainty about response of populations to management. This general approach to management has a number of desirable features and is recommended for use in many other programs of management and conservation.

  2. Characterization of the ITER CS conductor and projection to the ITER CS performance

    DOE PAGES

    Martovetsky, N.; Isono, T.; Bessette, D.; ...

    2017-06-20

    The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, a conductor straight short sample testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. In this paper, we obtained valuable data in a wide range of the parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we will make a projection to the performance of the CS in the ITER reference scenario.

  3. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on the development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  4. Characterization of the ITER CS conductor and projection to the ITER CS performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martovetsky, N.; Isono, T.; Bessette, D.

    The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, a conductor straight short sample testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. In this paper, we obtained valuable data in a wide range of the parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we will make a projection to the performance of the CS in the ITER reference scenario.

  5. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
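
    The reuse of previously computed solution vectors and preconditioners can be illustrated with a warm-started iterative solve; the toy system below is symmetric positive definite so a conjugate-gradient solver applies, whereas BEA matrices are generally nonsymmetric and would call for GMRES with the same warm-start idea.

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(0)
        n = 200
        B = rng.standard_normal((n, n))
        A = B @ B.T + n * np.eye(n)            # toy SPD "baseline" system
        b = rng.standard_normal(n)

        d = np.diag(A)
        M = LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi preconditioner, built once

        x_base, _ = cg(A, b, M=M)

        # Perturbed ("reanalysis") system after a small shape change.
        A_pert = A + 0.01 * np.eye(n)
        b_pert = b + 0.01 * rng.standard_normal(n)

        # Warm start from the previous solution and reuse the old preconditioner.
        x_pert, info = cg(A_pert, b_pert, x0=x_base, M=M)
        print("converged" if info == 0 else "info = %d" % info)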

  6. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
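
    The two update patterns contrasted above can be seen most simply on a linear system: Jacobi updates every component from the previous iterate (and is easy to parallelize), while Gauss-Seidel uses each newly computed component immediately. The sketch below shows only this generic distinction, not its adaptation to Fisher scoring for tomographic likelihoods.

        import numpy as np

        def jacobi_step(A, b, x):
            # Simultaneous update: all components computed from the old iterate.
            d = np.diag(A)
            return (b - (A @ x - d * x)) / d

        def gauss_seidel_step(A, b, x):
            # Sequential update: newest values are used as soon as they are available.
            x = x.copy()
            for i in range(len(b)):
                x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
            return x

        # Diagonally dominant toy system: both iterations converge to the solution.
        A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        xj = xg = np.zeros(3)
        for _ in range(25):
            xj, xg = jacobi_step(A, b, xj), gauss_seidel_step(A, b, xg)
        print(xj, xg, np.linalg.solve(A, b))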

  7. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
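
    The iterative procedure in question is of the expectation-maximization type for normal mixtures; a minimal one-dimensional, two-component sketch of such an iteration (not the authors' exact formulation or their convergence analysis) is:

        import numpy as np

        def em_two_normals(x, n_iter=100):
            # Fit pi*N(mu1, s1^2) + (1-pi)*N(mu2, s2^2) by EM-type iteration.
            pi, mu1, mu2 = 0.5, np.percentile(x, 25), np.percentile(x, 75)
            s1 = s2 = np.std(x)
            for _ in range(n_iter):
                # E-step: responsibility of component 1 (shared constants cancel in the ratio).
                p1 = pi * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
                p2 = (1 - pi) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
                r = p1 / (p1 + p2)
                # M-step: update mixture weight, means and standard deviations.
                pi = r.mean()
                mu1 = np.sum(r * x) / np.sum(r)
                mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
                s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
                s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
            return pi, mu1, s1, mu2, s2

        rng = np.random.default_rng(1)
        data = np.concatenate([rng.normal(0, 1, 400), rng.normal(4, 0.5, 600)])
        print(em_two_normals(data))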

  8. Application of ant colony optimization in development of models for prediction of anti-HIV-1 activity of HEPT derivatives.

    PubMed

    Zare-Shahabadi, Vali; Abbasitabar, Fatemeh

    2010-09-01

    Quantitative structure-activity relationship models were derived for 107 analogs of 1-[(2-hydroxyethoxy) methyl]-6-(phenylthio)thymine, a potent inhibitor of the HIV-1 reverse transcriptase. The activities of these compounds were investigated by means of the multiple linear regression (MLR) technique. An ant colony optimization algorithm, called Memorized_ACS, was applied for selecting relevant descriptors and detecting outliers. This algorithm uses an external memory based upon knowledge incorporation from previous iterations. At first the memory is empty; it is then filled by running several ACS algorithms. In this respect, after each ACS run, the elite ant is stored in the memory and the process is continued to fill the memory. Here, pheromone updating is performed by all elite ants collected in the memory; this results in improvements in both the exploration and exploitation behaviors of the ACS algorithm. The memory is then emptied and filled again by performing several ACS algorithms using the updated pheromone trails. This process is repeated for several iterations. At the end, the memory contains several top solutions for the problem. The number of appearances of each descriptor in the external memory is a good criterion for its importance. Finally, prediction is performed by the elitist ant, and interpretation is carried out by considering the importance of each descriptor. The best MLR model has a training error of 0.47 log (1/EC(50)) units (R(2) = 0.90) and a prediction error of 0.76 log (1/EC(50)) units (R(2) = 0.88). Copyright 2010 Wiley Periodicals, Inc.

  9. PRIM: An Efficient Preconditioning Iterative Reweighted Least Squares Method for Parallel Brain MRI Reconstruction.

    PubMed

    Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou

    2018-02-08

    The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. The joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, but the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimization methods for the JTV model reach only a sublinear convergence rate, in this paper we propose a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
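
    The basic iteratively reweighted least squares idea, stripped of the paper's preconditioner and JTV-specific structure, can be sketched for a simple l1 regression problem: each iteration re-derives weights from the current residuals and solves a weighted least-squares system.

        import numpy as np

        def irls_l1(A, b, n_iter=50, eps=1e-6):
            # Approximately minimize ||Ax - b||_1 by iteratively reweighted least squares.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(n_iter):
                r = A @ x - b
                w = 1.0 / np.maximum(np.abs(r), eps)          # weights from current residuals
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))  # weighted normal equations
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 5))
        b = A @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.standard_normal(100)
        print(irls_l1(A, b))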

  10. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  11. From Constructive Field Theory to Fractional Stochastic Calculus. (II) Constructive Proof of Convergence for the Lévy Area of Fractional Brownian Motion with Hurst Index α ∈ (1/8, 1/4)

    NASA Astrophysics Data System (ADS)

    Magnen, Jacques; Unterberger, Jérémie

    2012-03-01

    Let $B=(B_1(t),\ldots,B_d(t))$ be a $d$-dimensional fractional Brownian motion with Hurst index $\alpha<1/4$, or more generally a Gaussian process whose paths have the same local regularity. Defining properly iterated integrals of $B$ is a difficult task because of the low Hölder regularity index of its paths. Yet rough path theory shows it is the key to the construction of a stochastic calculus with respect to $B$, or to solving differential equations driven by $B$. We intend to show in a series of papers how to desingularize iterated integrals by a weak, singular non-Gaussian perturbation of the Gaussian measure defined by a limit-in-law procedure. Convergence is proved by using "standard" tools of constructive field theory, in particular cluster expansions and renormalization. These powerful tools allow optimal estimates and call for an extension of Gaussian tools such as, for instance, the Malliavin calculus. After a first introductory paper [MagUnt1], this one concentrates on the details of the constructive proof of convergence for second-order iterated integrals, also known as the Lévy area.

  12. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  13. Simulation-based optimization of lattice support structures for offshore wind energy converters with the simultaneous perturbation algorithm

    NASA Astrophysics Data System (ADS)

    Molde, H.; Zwick, D.; Muskulus, M.

    2014-12-01

    Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs, and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
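
    The simultaneous perturbation update itself is compact: each iteration evaluates the objective only twice, at the design perturbed up and down along one random direction, and forms a pseudo-gradient from the difference. The gain sequences and the toy quadratic objective below are illustrative stand-ins for the study's time-domain fatigue and utilization analyses.

        import numpy as np

        def spsa(loss, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
            # Simultaneous Perturbation Stochastic Approximation (Spall).
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            for k in range(1, n_iter + 1):
                ak = a / k ** alpha                     # step-size gain
                ck = c / k ** gamma                     # perturbation gain
                delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher direction
                g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
                theta = theta - ak * g_hat              # move against the pseudo-gradient
            return theta

        # Toy usage: a quadratic standing in for the structural cost function.
        print(spsa(lambda t: float(np.sum((t - 3.0) ** 2)), np.zeros(4)))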

  14. Numerical Study of High Heat Flux Performances of Flat-Tile Divertor Mock-ups with Hypervapotron Cooling Concept

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong

    2015-09-01

    The hypervapotron (HV), as an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Beforehand, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width among the tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. The optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004)

  15. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme must satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.

  16. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in a form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
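
    The structural observation that the two-dimensional discretization is a sum of two Kronecker products can be seen with a toy one-dimensional difference matrix; the fractional discretization in the paper would replace T below by a dense Toeplitz-like matrix, but the Kronecker-sum structure exploited by the splitting iteration and the preconditioner is the same.

        import numpy as np

        n = 4
        # Toy one-dimensional difference operator (stand-in for a 1-D fractional
        # diffusion discretization, which would be dense and Toeplitz-like).
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        I = np.eye(n)

        # Two-dimensional operator written as the sum of two Kronecker products.
        A = np.kron(I, T) + np.kron(T, I)
        print(A.shape)   # (16, 16)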

  17. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm

    PubMed Central

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, which is mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases, and two of its parameters need to be optimized. In order to improve the performance of the SVM, in other words to achieve a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, it occasionally cannot avoid the influence of local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global searching and convergence speed of KH. To obtain a more accurate model of the krill behavior, an updated crossover operator is added to the approach. This guarantees that the krill population is diverse in the early iterations and has good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and genetic algorithm (GA)), and we find that EKH is better than the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good beginning and theoretical basis for further study of other improved krill algorithms' applications in all E-nose application areas. PMID:27529247

  18. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy with variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609

  19. Test of prototype ITER vacuum ultraviolet spectrometer and its application to impurity study in KSTAR plasmas.

    PubMed

    Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2014-11-01

    To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency. The calibration curve was consistent with the calibration points derived from the experiment using a calibrated hollow cathode lamp. For application, the prototype ITER VUV spectrometer was installed at KSTAR, and various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and the C IV emission intensities could be found.

  20. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer measurements. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] from the neural network retrieval fields in the netCDF file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
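
    The retrieval loop described above has the usual optimal-estimation (Gauss-Newton) form; the sketch below is a generic version of such an iteration, in which the forward model, its Jacobian, the prior and the error covariances are placeholders rather than the MWR3C processing code.

        import numpy as np

        def oem_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10, tol=1e-3):
            # Gauss-Newton optimal estimation: iterate from the first guess x_a
            # until the state update falls below the convergence criterion.
            x = x_a.copy()
            S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
            for _ in range(n_iter):
                K = jacobian(x)                                        # forward-model Jacobian
                S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)     # a posteriori covariance
                x_new = x_a + S_hat @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
                converged = np.max(np.abs(x_new - x)) < tol
                x = x_new
                if converged:
                    break
            return x, np.sqrt(np.diag(S_hat))   # retrieved state and 1-sigma uncertainty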

  1. SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    2015-06-15

    Purpose: This work aims to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement over existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm uses a priori beam angle templates as the initial guess and iteratively generates angular updates for this initial set (the angle generation method) with improved dose conformality, quantitatively measured by the objective function. During each iteration, we select "the test angle" in the initial set and use group-sparsity based fluence map optimization to identify "the candidate angle" for updating "the test angle": all angles in the initial set except "the test angle", namely "the fixed set", are set free, i.e., with no group-sparsity penalty, while the rest of the angles, including "the test angle", form "the working set". "The candidate angle" is then selected as the angle in "the working set" with locally maximal group sparsity and the smallest objective function value, and it replaces "the test angle" if "the fixed set" together with "the candidate angle" yields a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). The other angles in the initial set are in turn selected as "the test angle" for angular updates, and this chain of updates is iterated until no new angular update is identified over a full loop. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality over the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  2. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved, (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved, (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
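
    A minimal Gaussian-process Bayesian-optimization loop of the kind used to tune such pre-processing parameters is sketched below; the objective, the parameter bounds and the random candidate pool used to maximize expected improvement are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
            # Sequentially choose parameter settings by maximizing expected improvement
            # under a Gaussian-process surrogate of the (expensive) objective.
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            X = rng.uniform(lo, hi, size=(n_init, len(lo)))
            y = np.array([objective(x) for x in X])
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            for _ in range(n_iter):
                gp.fit(X, y)
                cand = rng.uniform(lo, hi, size=(1000, len(lo)))
                mu, sigma = gp.predict(cand, return_std=True)
                best = y.min()
                z = (best - mu) / np.maximum(sigma, 1e-12)
                ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
                x_next = cand[np.argmax(ei)]
                X = np.vstack([X, x_next])
                y = np.append(y, objective(x_next))
            return X[np.argmin(y)], y.min()

        # Toy usage: two tuning parameters (e.g. voxel size and smoothing kernel).
        best_x, best_y = bayes_opt(lambda p: float(np.sum((p - 3.7) ** 2)),
                                   np.array([[1.0, 12.0], [1.0, 8.0]]), n_iter=15)
        print(best_x, best_y)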

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  4. Adaptive management for subsurface pressure and plume control in application to geological CO2 storage

    NASA Astrophysics Data System (ADS)

    Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.

    2016-12-01

    Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in the reservoir may result in ground water contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, and in the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objective of this study is to investigate different pressure management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes advanced automated optimization algorithms and suitable process models. The adaptive management integrates monitoring, forward modeling, inversion modeling and optimization through an iterative process. In this study, we employ an adaptive framework primarily to understand how the initial site characterization and the frequency of model updates (calibration) and optimization calculations for controlling extraction rates based on monitoring data affect the accuracy and success of the management without violating pressure buildup constraints in the subsurface reservoir system. We will present results of applying the adaptive framework to test the appropriateness of different management strategies for a realistic field injection project.

  5. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.

  6. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
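
    Sample entropy, the regularity measure tuned above, can be computed with a standard formulation such as the following textbook sketch (not the DASim implementation); lower values indicate a more regular series.

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            # SampEn(m, r): negative log ratio of (m+1)- to m-length template matches.
            x = np.asarray(x, dtype=float)
            N = len(x)
            if r is None:
                r = 0.2 * np.std(x)            # conventional default tolerance

            def count_pairs(length):
                # Use N - m templates for both lengths so the counts are comparable.
                tpl = np.array([x[i:i + length] for i in range(N - m)])
                n = 0
                for i in range(len(tpl) - 1):
                    d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)   # Chebyshev distance
                    n += int(np.sum(d <= r))
                return n

            B, A = count_pairs(m), count_pairs(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else float("inf")

        rng = np.random.default_rng(0)
        print(sample_entropy(np.sin(0.1 * np.arange(300))))     # regular, low SampEn
        print(sample_entropy(rng.standard_normal(300)))         # irregular, higher SampEn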

  7. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  8. An Efficient Approach for Lipase-Catalyzed Synthesis of Retinyl Laurate Nutraceutical by Combining Ultrasound Assistance and Artificial Neural Network Optimization.

    PubMed

    Huang, Shang-Ming; Li, Hsin-Ju; Liu, Yung-Chuan; Kuo, Chia-Hung; Shieh, Chwen-Jen

    2017-11-15

    Although retinol is an important nutrient, it is highly sensitive to oxidation. At present, some ester forms of retinol are generally used in nutritional supplements because of their stability and bioavailability. However, such esters are commonly synthesized by chemical procedures which are harmful to the environment. Thus, this study utilized a green method using lipase as a catalyst with sonication assistance to produce a retinol derivative named retinyl laurate. Moreover, the process was optimized by an artificial neural network (ANN). First, a three-level-four-factor central composite design (CCD) was employed to design 27 experiments, in which the highest relative conversion was 82.64%. Further, the optimal architecture of the CCD-employing ANN was developed, including the Levenberg-Marquardt learning algorithm, the transfer function (hyperbolic tangent), the number of iterations (10,000), and the number of nodes in the hidden layer (6). The performance of the ANN was evaluated by the root mean squared error (RMSE) and the coefficient of determination (R²) between predicted and observed data, which displayed a good data-fitting property. Finally, the process performed with the optimal parameters actually obtained a relative conversion of 88.31% without long-term reactions, and the lipase showed great reusability for biosynthesis. Thus, this study utilizes green technology to efficiently produce retinyl laurate, and the bioprocess is well established by ANN-mediated modeling and optimization.

  9. Majorization as a Tool for Optimizing a Class of Matrix Functions.

    ERIC Educational Resources Information Center

    Kiers, Henk A.

    1990-01-01

    General algorithms are presented that can be used for optimizing matrix trace functions subject to certain constraints on the parameters. The parameter set that minimizes the majorizing function also decreases the matrix trace function, providing a monotonically convergent algorithm for minimizing the matrix trace function iteratively. (SLD)

  10. Inference of the sparse kinetic Ising model using the decimation method

    NASA Astrophysics Data System (ADS)

    Decelle, Aurélien; Zhang, Pan

    2015-05-01

    In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be applied naturally within a maximum-likelihood optimization algorithm, as opposed to the static case where the pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.

  11. Adaptive iterative design (AID): a novel approach for evaluating the interactive effects of multiple stressors on aquatic organisms.

    PubMed

    Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R

    2012-08-15

    The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the required amount of a priori knowledge, the complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design for eco-toxicologists to more accurately and efficiently examine complex multiple stressor interactions. AID incorporates the power of the general linear model and the A-optimal criterion with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressors, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (although literature values, e.g. EC values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive interactions (e.g. Cd*Zn). Interestingly, our study found only binary interactions to be significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.

  13. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or the IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or the IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168 lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.

  14. Aerothermodynamic testing requirements for future space transportation systems

    NASA Technical Reports Server (NTRS)

    Paulson, John W., Jr.; Miller, Charles G., III

    1995-01-01

    Aerothermodynamics, encompassing aerodynamics, aeroheating, and fluid dynamic and physical processes, is the genesis for the design and development of advanced space transportation vehicles. It provides crucial information to other disciplines involved in the development process such as structures, materials, propulsion, and avionics. Sources of aerothermodynamic information include ground-based facilities, computational fluid dynamic (CFD) and engineering computer codes, and flight experiments. Utilization of this triad is required to provide the optimum requirements while reducing undue design conservatism, risk, and cost. This paper discusses the role of ground-based facilities in the design of future space transportation system concepts. Testing methodology is addressed, including the iterative approach often required for the assessment and optimization of configurations from an aerothermodynamic perspective. The influence of vehicle shape and the transition from parametric studies for optimization to benchmark studies for final design and establishment of the flight data book is discussed. Future aerothermodynamic testing requirements including the need for new facilities are also presented.

  15. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    DTIC Science & Technology

    2016-11-28

    Recovered fragments from the report: Section 4.6, "On the Recoverable Robust Traveling Salesman Problem," and Section 4.7, "A Bicriteria Approach to Robust Optimization." The body text notes that the traveling salesman problem (TSP) is a well-known combinatorial optimization problem and describes an iterative procedure for the robust traveling salesman problem; while this iterative algorithm results in an optimal solution to the robust TSP, the remainder of the passage is truncated in the source.

  16. TU-AB-303-12: Towards Inter and Intra Fraction Plan Adaptation for the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontaxis, C; Bol, G; Lagendijk, J

    Purpose: To develop a new sequencer for IMRT that during treatment can account for anatomy changes provided by online and real-time MRI. This sequencer employs a novel inter and intra fraction scheme that converges to the prescribed dose without a final segment weight optimization (SWO) and enables immediate optimization and delivery of radiation adapted to the deformed anatomy. Methods: The sequencer is initially supplied with a voxel-based dose prescription and during the optimization iteratively generates segments that provide this prescribed dose. Every iteration selects the best segment for the current anatomy state, calculates the dose it will deliver, warps it back to the reference prescription grid and subtracts it from the remaining prescribed dose. This process continues until a certain percentage of dose or a number of segments has been delivered. The anatomy changes that occur during treatment require that convergence is achieved without a final SWO. This is resolved by adding the difference between the prescribed and delivered dose up to this fraction to the prescription of the subsequent fraction. This process is repeated for all fractions of the treatment. Results: Two breast cases were selected to stress test the pipeline by producing artificial inter and intra fraction anatomy deformations using a combination of incrementally applied rigid transformations. The dose convergence of the adaptive scheme over the entire treatment, relative to the prescribed dose, was on average 8.6% higher than the static plans delivered to the respective deformed anatomies and only 1.6% less than the static segment weighted plans on the static anatomy. Conclusion: This new adaptive sequencing strategy enables dose convergence without the need of SWO while adapting the plan to intermediate anatomies, which is a prerequisite for online plan adaptation. We are now testing our pipeline on prostate cases using clinical anatomy deformation data from our department. This work is financially supported by Elekta AB, Stockholm, Sweden.

  17. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the setting of multiple simulations, the dispersion variances of blocks can be thought to account for technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations with minimization of the interpolation variance, and drill hole simulations with maximization of the interpolation variance. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.

  18. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
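
    The acceleration referred to above is Pulay's DIIS scheme applied to a self-consistent fixed-point iteration: each new estimate is the linear combination of previous updates whose combined residual is smallest, with coefficients summing to one. A minimal generic sketch follows; the map g stands in for one WHAM (or MBAR) self-consistency update, which is not reproduced here:

        import numpy as np

        def diis_solve(g, x0, history=5, tol=1e-10, max_iter=200):
            # Accelerate the fixed-point iteration x <- g(x) with DIIS
            # (direct inversion in the iterative subspace).
            xs, rs = [], []
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                gx = g(x)
                r = gx - x                      # residual of the plain iteration
                xs.append(gx)
                rs.append(r)
                if np.linalg.norm(r) < tol:
                    return gx
                xs, rs = xs[-history:], rs[-history:]
                m = len(rs)
                # Lagrange system: minimize |sum_i c_i r_i| subject to sum_i c_i = 1
                B = np.empty((m + 1, m + 1))
                B[:m, :m] = np.array([[np.dot(ri, rj) for rj in rs] for ri in rs])
                B[m, :m] = B[:m, m] = -1.0
                B[m, m] = 0.0
                rhs = np.zeros(m + 1)
                rhs[m] = -1.0
                c = np.linalg.solve(B, rhs)[:m]
                x = sum(ci * xi for ci, xi in zip(c, xs))
            return x

        # toy example: componentwise fixed point of x = cos(x)
        sol = diis_solve(np.cos, np.ones(3))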

  19. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  20. Optimal design of gene knockout experiments for gene regulatory network inference

    PubMed Central

    Ud-Dean, S. M. Minhaz; Gunawan, Rudiyanto

    2016-01-01

    Motivation: We addressed the problem of inferring gene regulatory network (GRN) from gene expression data of knockout (KO) experiments. This inference is known to be underdetermined and the GRN is not identifiable from data. Past studies have shown that suboptimal design of experiments (DOE) contributes significantly to the identifiability issue of biological networks, including GRNs. However, optimizing DOE has received much less attention than developing methods for GRN inference. Results: We developed REDuction of UnCertain Edges (REDUCE) algorithm for finding the optimal gene KO experiment for inferring directed graphs (digraphs) of GRNs. REDUCE employed ensemble inference to define uncertain gene interactions that could not be verified by prior data. The optimal experiment corresponds to the maximum number of uncertain interactions that could be verified by the resulting data. For this purpose, we introduced the concept of edge separatoid which gave a list of nodes (genes) that upon their removal would allow the verification of a particular gene interaction. Finally, we proposed a procedure that iterates over performing KO experiments, ensemble update and optimal DOE. The case studies including the inference of Escherichia coli GRN and DREAM 4 100-gene GRNs, demonstrated the efficacy of the iterative GRN inference. In comparison to systematic KOs, REDUCE could provide much higher information return per gene KO experiment and consequently more accurate GRN estimates. Conclusions: REDUCE represents an enabling tool for tackling the underdetermined GRN inference. Along with advances in gene deletion and automation technology, the iterative procedure brings an efficient and fully automated GRN inference closer to reality. Availability and implementation: MATLAB and Python scripts of REDUCE are available on www.cabsel.ethz.ch/tools/REDUCE. Contact: rudi.gunawan@chem.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26568633

  1. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
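
    For the five-point Laplacian on a uniform grid with spacing h, the classical result is that the Jacobi spectral radius is cos(pi*h) and the optimal relaxation factor is omega = 2/(1 + sqrt(1 - cos^2(pi*h))) = 2/(1 + sin(pi*h)). A small sketch using these textbook formulas (not code from the paper):

        import numpy as np

        def sor_poisson(f, n, sweeps=500):
            # Plain lexicographic SOR for -Laplace(u) = f on the unit square with
            # zero Dirichlet boundaries, using the classical optimal relaxation
            # factor for the 5-point stencil (a textbook sketch).
            h = 1.0 / n
            omega = 2.0 / (1.0 + np.sin(np.pi * h))   # = 2 / (1 + sqrt(1 - cos(pi h)^2))
            u = np.zeros((n + 1, n + 1))
            for _ in range(sweeps):
                for i in range(1, n):
                    for j in range(1, n):
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1]
                                     + u[i, j + 1] + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])   # over-relaxed Gauss-Seidel step
            return u

        n = 32
        f = np.ones((n + 1, n + 1))   # constant source term
        u = sor_poisson(f, n)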

  2. Kalman Filter for Calibrating a Telescope Focal Plane

    NASA Technical Reports Server (NTRS)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.

  3. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.

  4. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

    A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
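
    As a generic illustration of the iterative strategy mentioned above, the following sketch runs a Jacobi-preconditioned conjugate-gradient solver on a sparse symmetric positive-definite system standing in for the PWLS normal equations; the matrix and preconditioner are placeholders, not the authors' sinogram-restoration operators:

        import numpy as np
        from scipy.sparse import diags

        def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
            # Jacobi-preconditioned conjugate gradient for an SPD sparse system.
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # toy SPD tridiagonal system as a stand-in for the restoration matrix
        n = 1000
        A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)
        x = pcg(A, b, M_inv_diag=1.0 / A.diagonal())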

  5. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and distribute them evenly across the nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm fully exploits the advantages of parallel computing, significantly improving the computation speed of the matching pursuit decomposition while offering good scalability. Moreover, having each compute node search for only one optimal Morlet wavelet per iteration proves to be the most efficient implementation.
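
    A simplified sketch of the parallel scheme described above follows, with a process pool standing in for MPI ranks and a coarse parameter grid standing in for the Morlet-wavelet search; the peak selection, atom parameterization, and grid ranges are illustrative placeholders:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def morlet(t, t0, freq, scale):
            # Real Morlet-like atom centred at t0 (illustrative, not the paper's exact atom).
            u = (t - t0) / scale
            return np.exp(-0.5 * u ** 2) * np.cos(2 * np.pi * freq * (t - t0))

        def best_atom_near(args):
            # Work assigned to one node: search the best atom near one envelope peak.
            residual, t, t0 = args
            best_corr, best_component = -np.inf, None
            for freq in np.linspace(10.0, 80.0, 15):        # coarse grids (placeholders)
                for scale in np.linspace(0.005, 0.05, 10):
                    atom = morlet(t, t0, freq, scale)
                    atom /= np.linalg.norm(atom) + 1e-12
                    coeff = residual @ atom
                    if abs(coeff) > best_corr:
                        best_corr, best_component = abs(coeff), coeff * atom
            return best_corr, best_component

        def matching_pursuit(signal, t, n_iter=20, n_workers=4):
            residual, approx = signal.copy(), np.zeros_like(signal)
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                for _ in range(n_iter):
                    # strongest residual samples stand in for envelope peaks
                    peaks = t[np.argsort(np.abs(residual))[-n_workers:]]
                    results = pool.map(best_atom_near, [(residual, t, p) for p in peaks])
                    _, component = max(results, key=lambda r: r[0])
                    approx += component
                    residual -= component
            return approx

        if __name__ == "__main__":
            t = np.linspace(0.0, 1.0, 1000)
            signal = morlet(t, 0.3, 30.0, 0.02) + morlet(t, 0.7, 50.0, 0.01)
            approx = matching_pursuit(signal, t)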

  6. Mask patterning process using the negative tone chemically amplified resist TOK OEBR-CAN024

    NASA Astrophysics Data System (ADS)

    Irmscher, Mathias; Beyer, Dirk; Butschke, Joerg; Hudek, Peter; Koepernik, Corinna; Plumhoff, Jason; Rausa, Emmanuel; Sato, Mitsuru; Voehringer, Peter

    2004-08-01

    Optimized process parameters using the TOK OEBR-CAN024 resist for high chrome load patterning have been determined. A tight linearity tolerance for opaque and clear features, independent of the local pattern density, was the goal of our process integration work. For this purpose we evaluated a new correction method taking into account electron scattering and process influences. The method is based on matching the measured pattern geometry by iterative back-simulation using multiple Gauss and/or exponential functions. The obtained control function acts as input for the proximity correction software PROXECCO. Approaches with different pattern oversize and two Cr thicknesses were carried out and the results are reported. Isolated opaque and clear lines could be realized within a very tight linearity range. The increasing line width of small dense lines, induced by the etching process, could be corrected only partially.

  7. Optimizing the availability of a buffered industrial process

    DOEpatents

    Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.

    2004-08-24

    A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population size is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.

  8. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.

  9. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the removal material amount during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with a long machining time, so a small amount of removal material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  10. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  11. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  12. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfeng; Laird, Carl Damon

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  13. Dynamic load simulator

    NASA Technical Reports Server (NTRS)

    Joncas, K. P.

    1972-01-01

    Concepts and techniques for identifying and simulating both the steady state and dynamic characteristics of electrical loads for use during integrated system test and evaluation are discussed. The investigations showed that it is feasible to design and develop interrogation and simulation equipment to perform the desired functions. During the evaluation, actual spacecraft loads were interrogated by stimulating the loads with their normal input voltage and measuring the resultant voltage and current time histories. Elements of the circuits were optimized by an iterative process of selecting element values and comparing the time-domain response of the model with those obtained from the real equipment during interrogation.

  14. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection

    PubMed Central

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. In this approach, the membership functions of the fuzzy system are calculated using the Genetic Algorithm by taking preobserved actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of the shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1-score parameter. PMID:27127500

  15. Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations

    NASA Technical Reports Server (NTRS)

    Schuler, A. L.; Wiley, D. R.

    1989-01-01

    Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.

  16. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
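
    The ordered-subsets idea cited above as one of the acceleration techniques can be sketched on the classical ML-EM update; this is the generic textbook scheme, not the alternating-minimization algorithm of the paper:

        import numpy as np

        def os_em(A, y, n_subsets=8, n_iter=10):
            # Ordered-subsets acceleration of the classical ML-EM update for
            # emission-type data y ~ A x. One full pass visits every subset of
            # projection rows once, so the image is updated n_subsets times per pass.
            n_rows, n_cols = A.shape
            x = np.ones(n_cols)
            subsets = [np.arange(s, n_rows, n_subsets) for s in range(n_subsets)]
            for _ in range(n_iter):
                for rows in subsets:
                    As = A[rows]
                    ratio = y[rows] / np.maximum(As @ x, 1e-12)
                    x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), 1e-12)
            return x

        # toy system: random nonnegative projector and noiseless data
        rng = np.random.default_rng(1)
        A = rng.random((256, 64))
        x_true = rng.random(64)
        x_rec = os_em(A, A @ x_true)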

  17. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are presented to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  18. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
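
    A minimal sketch of the hybrid idea follows, with a plain particle swarm standing in for any of the nature-inspired searchers named above and standard Lloyd iterations as the K-means refinement; the data, parameters, and operators are illustrative only:

        import numpy as np

        def sse(centroids, X):
            # Within-cluster sum of squared distances for a flat centroid vector.
            C = centroids.reshape(-1, X.shape[1])
            d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            return d.min(axis=1).sum()

        def swarm_init(X, k, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            # Plain PSO over centroid positions: a stand-in for Bat, Cuckoo, Firefly, etc.
            rng = np.random.default_rng(seed)
            dim = k * X.shape[1]
            lo, hi = X.min(), X.max()
            pos = rng.uniform(lo, hi, (n_particles, dim))
            vel = np.zeros_like(pos)
            pbest, pbest_f = pos.copy(), np.array([sse(p, X) for p in pos])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos += vel
                f = np.array([sse(p, X) for p in pos])
                better = f < pbest_f
                pbest[better], pbest_f[better] = pos[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest.reshape(k, X.shape[1])

        def kmeans_refine(X, centroids, n_iter=20):
            # Standard Lloyd iterations starting from the swarm-found centroids.
            for _ in range(n_iter):
                labels = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
                for j in range(len(centroids)):
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(0)
            return centroids, labels

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in ((0, 0), (3, 3), (0, 4))])
        centroids, labels = kmeans_refine(X, swarm_init(X, k=3))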

  19. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  20. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of the aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
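
    The dual-decomposition mechanism described above can be sketched on a toy arrival-time problem: each aircraft solves a small local subproblem given the current dual prices, and the prices on the pairwise separation constraints are updated by projected subgradient ascent. The cost functions, step size, and closed-form local solver are placeholders, not the formulation of the thesis:

        import numpy as np

        def local_solve(t_ref, a):
            # Each aircraft minimizes its own time cost (t - t_ref)^2 + a * t,
            # where a collects the price terms built from the dual variables.
            return t_ref - a / 2.0

        def dual_decomposition(t_ref, separation, step=0.05, n_iter=500):
            # Toy dual decomposition for arrival times with pairwise constraints
            # t[i+1] - t[i] >= separation.
            n = len(t_ref)
            lam = np.zeros(n - 1)                 # one multiplier per consecutive pair
            for _ in range(n_iter):
                a = np.zeros(n)
                a[:-1] += lam                     # price from constraint (i, i+1)
                a[1:] -= lam                      # price from constraint (i-1, i)
                t = local_solve(t_ref, a)         # solved independently per aircraft
                # subgradient ascent on the duals, projected onto lam >= 0
                lam = np.maximum(0.0, lam + step * (separation - np.diff(t)))
            return t

        t_ref = np.array([10.0, 10.5, 10.6, 12.0])   # preferred arrival times (minutes)
        t = dual_decomposition(t_ref, separation=1.0)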

  1. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  2. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.

  3. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    PubMed

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  4. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.

  5. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique to the optimization process in the TPS and investigated the dependence on the target volume of the speedup effect and applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect was improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target’s prescribed dose and OAR’s Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS’s optimization. The technique was effective particularly for large target cases.
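
    A loose sketch of the two-level splitting described above follows: the inner loop updates spot weights using only the dense "main" part of the dose matrix while the "halo" contribution is held fixed, and the outer loop recomputes the halo dose. The matrices, objective, and update rule are generic placeholders, not the TPS implementation:

        import numpy as np

        def dual_loop_optimize(D_main, D_halo, prescription, outer=5, inner=100, lr=2e-4):
            # Inner loop: gradient steps on a plain quadratic objective using only the
            # main dose matrix; the halo dose is frozen and refreshed per outer loop.
            n_spots = D_main.shape[1]
            w = np.ones(n_spots)
            for _ in range(outer):
                halo_dose = D_halo @ w                 # recalculated once per outer loop
                for _ in range(inner):
                    residual = D_main @ w + halo_dose - prescription
                    grad = D_main.T @ residual         # gradient w.r.t. the spot weights
                    w = np.maximum(0.0, w - lr * grad) # keep weights nonnegative
            return w

        rng = np.random.default_rng(3)
        n_vox, n_spots = 200, 40
        D_main = rng.random((n_vox, n_spots))
        D_halo = 0.05 * rng.random((n_vox, n_spots))   # broad, low-amplitude tails
        prescription = np.full(n_vox, 2.0)
        w = dual_loop_optimize(D_main, D_halo, prescription)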

  6. Topology optimization of natural convection: Flow in a differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Berggren, Martin; Henningson, Dan

    2017-11-01

    The goal of the present work is to develop methods for optimization of the design of natural convection cooled heat sinks, using resolved simulation of both fluid flow and heat transfer. We rely on mathematical programming techniques combined with direct numerical simulations in order to iteratively update the topology of a solid structure towards optimality, i.e. until the design yielding the best performance is found, while satisfying a specific set of constraints. The investigated test case is a two-dimensional differentially heated cavity, in which the two vertical walls are held at different temperatures. The buoyancy force induces a swirling convective flow around a solid structure, whose topology is optimized to maximize the heat flux through the cavity. We rely on the spectral-element code Nek5000 to compute a high-order accurate solution of the natural convection flow arising from the conjugate heat transfer in the cavity. The laminar, steady-state solution of the problem is evaluated with a time-marching scheme that has an increased convergence rate; the actual iterative optimization is obtained using a steepest-descent algorithm, and the gradients are conveniently computed using the continuous adjoint equations for convective heat transfer.

  7. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few new papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous covering, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets for which efficient learning is hardly carried out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the search space structure, working independently of learning processes. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" space zones to "unsteady" ones, which require more experiments to be well modeled. The evaluation of new algorithms through benchmarks is compulsory given the lack of prior evidence of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect and efficiency of the sampling on future Machine Learning performance are also quantified. The minimum sample size required for the algorithm to be statistically distinguishable from simple random sampling is investigated.

  8. Technology Education Using a Novel Approach in e-Learning-Towards Optimizing the Quality of Learning Outcomes

    NASA Astrophysics Data System (ADS)

    Malkawi, M. I.; Hawarey, M. M.

    2012-04-01

    Ever since the advent of presenting taught material in electronic form, international bodies, academic institutions, the public sector, and specialized entities in the private sector have all sought to exploit the power of Distance Learning and e-Learning to disseminate knowledge in science and art using the ubiquitous World Wide Web and its supporting Internet infrastructure. Many bodies sponsoring science and education, such as UNESCO, the European Community, and the World Bank, have been keen to fund multinational Distance Learning projects, many of which were directed at educated audiences in specific technical areas. Many countries around the Middle East have found interested European partners with whom to launch funding requests, and were generally successful in soliciting the needed funds from these bodies. Despite the wealth of knowledge they have generated in electronic form, most of the e-Learning schemes developed thus far have pursued their goals in the most conventional of ways; in essence, little innovation has been introduced to gain anything beyond traditional classroom lecturing, other than the advantage of simultaneous online testing and evaluation of the learned material by the examinees. In an effort to change the way people view the merits of e-Learning and to get the most out of it, we propose a novel approach aimed at optimizing the learning outcomes of presented materials, which we shall henceforth call Iterative e-Learning. In Iterative e-Learning, as the name implies, a student uses some form of electronic media to access course material in a specific subject. At the end of each phase (section, chapter, session, etc.) on a specific topic, the student is assessed online on how much he/she has achieved before moving on. If the student fails on a particular topic, the e-Learning process takes the student to a more detailed and deeper level of the subject matter where he/she failed; once the student bridges the gap, the process carries him/her on to the next level of the subject being pursued. This process is carried out at all levels of learning: section, chapter, and course. A student may not progress to the next course level before passing the entire course at 80% or more. When repeating a section, a chapter, or a whole course, the student is required to score a higher percentage than the 80% required the first time around, say 5% more per iteration. Students going through Iterative e-Learning are allowed to move on to the next level of learning sooner than others if the time they need to learn a particular topic is shorter than an average student would require, provided, of course, they pass all the required assessment phases. Unlike traditional classroom or online lecturing, a student going through Iterative e-Learning is expected to achieve a quality of learning not attainable via standard pedagogical methodologies.
With Iterative e-Learning, it is expected that poorly accredited academic institutions will, for the first time, be able to produce graduates who are more capable of competing for highly paid jobs globally and of contributing to more industry-supported economies.

  9. Targeted Structural Optimization with Additive Manufacturing of Metals

    NASA Technical Reports Server (NTRS)

    Burt, Adam; Hull, Patrick

    2015-01-01

    The recent advances in additive manufacturing (AM) of metals have now improved the state-of-the-art such that traditionally non-producible parts can be readily produced in a cost-effective way. Because of these advances in manufacturing technology, structural optimization techniques are well positioned to supplement and advance this new technology. The goal of this project is to develop a structural design, analysis, and optimization framework combined with AM to significantly light-weight the interior of metallic structures while maintaining the selected structural properties of the original solid. This is a new state-of-the-art capability to significantly reduce mass, while maintaining the structural integrity of the original design, something that can only be done with AM. In addition, this framework will couple the design, analysis, and fabrication process, meaning that what has been designed directly represents the produced part, thus closing the loop on the design cycle and removing human iteration between design and fabrication. This fundamental concept has applications from light-weighting launch vehicle components to in situ resource fabrication.

  10. Optimal design of reflectometer density profile measurements using a radar systems approach (invited) (abstract)

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.; Kim, K. W.; Peebles, W. A.; Rhodes, T. L.

    1997-01-01

    Reflectometry is an attractive and versatile diagnostic technique that can address a wide range of measurement needs on fusion devices. However, progress in the area of profile measurement has been hampered by the lack of a well-understood basis for the optimum design and implementation of such systems. Such a design basis is provided by the realization that reflectometer systems utilized for density profile measurements are in fact specialized forms of radar systems. In this article five criteria are introduced by which reflectometer systems can be systematically designed for optimal performance: range resolution, spatial sampling, turbulence immunity, bandwidth optimization, and the need for adaptive data processing. Many of these criteria are familiar from radar systems analysis, and are applicable to reflectometry after allowance is made for differences stemming from the nature of the plasma target. These criteria are utilized to critically evaluate current reflectometer density profile techniques and indicate improvements that can impact current and next step devices, such as ITER.

  11. A computational study of thrust augmenting ejectors based on a viscous-inviscid approach

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.; Tavella, Domingo A.; Roberts, Leonard

    1987-01-01

    A viscous-inviscid interaction technique is advocated as both an efficient and accurate means of predicting the performance of two-dimensional thrust augmenting ejectors. The flow field is subdivided into a viscous region that contains the turbulent jet and an inviscid region that contains the ambient fluid drawn into the device. The inviscid region is computed with a higher-order panel method, while an integral method is used for the description of the viscous part. The strong viscous-inviscid interaction present within the ejector is simulated in an iterative process where the two regions influence each other en route to a converged solution. The model is applied to a variety of parametric and optimization studies involving ejectors having either one or two primary jets. The effects of nozzle placement, inlet and diffuser shape, free stream speed, and ejector length are investigated. The inlet shape for single jet ejectors is optimized for various free stream speeds and Reynolds numbers. Optimal nozzle tilt and location are identified for various dual-ejector configurations.

  12. Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Kroo, I. M.

    1995-01-01

    Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include reducing the amount of information transferred between disciplines, the removal of large iteration-loops, allowing the use of different subspace optimizers among the various analysis groups, an analysis framework which is easily parallelized and can operate on heterogeneous equipment, and a structural framework that is well-suited for conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.

  13. Applications of artificial neural nets in structural mechanics

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hajela, Prabhat

    1990-01-01

    A brief introduction to the fundamentals of Neural Nets is given, followed by two applications in structural optimization. In the first case, the feasibility of simulating with neural nets the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.

  14. Applications of artificial neural nets in structural mechanics

    NASA Technical Reports Server (NTRS)

    Berke, L.; Hajela, P.

    1992-01-01

    A brief introduction to the fundamentals of Neural Nets is given, followed by two applications in structural optimization. In the first case, the feasibility of simulating with neural nets the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.

  15. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  16. Truncated Conjugate Gradient: An Optimal Strategy for the Analytical Evaluation of the Many-Body Polarization Energy and Forces in Molecular Simulations.

    PubMed

    Aviat, Félix; Levitt, Antoine; Stamm, Benjamin; Maday, Yvon; Ren, Pengyu; Ponder, Jay W; Lagardère, Louis; Piquemal, Jean-Philip

    2017-01-10

    We introduce a new class of methods, denoted as Truncated Conjugate Gradient (TCG), to solve the many-body polarization energy and its associated forces in molecular simulations (i.e., molecular dynamics (MD) and Monte Carlo). The method consists of a fixed number of Conjugate Gradient (CG) iterations. TCG approaches provide a scalable solution to the polarization problem at a user-chosen cost and a corresponding optimal accuracy. The optimality of the CG method guarantees that the number of required matrix-vector products is reduced to a minimum compared to other iterative methods. This family of methods is non-empirical, fully adaptive, and provides analytical gradients, therefore avoiding any energy drift in MD as compared to popular iterative solvers. Besides speed, one great advantage of this class of approximate methods is that their accuracy is systematically improvable. Indeed, as the CG method is a Krylov subspace method, the associated error is monotonically reduced at each iteration. On top of that, two improvements can be proposed at virtually no cost: (i) preconditioners can be employed, which leads to the Truncated Preconditioned Conjugate Gradient (TPCG); (ii) since the residual of the final step of the CG method is available, one additional Picard fixed point iteration ("peek"), equivalent to one step of Jacobi Over Relaxation (JOR) with relaxation parameter ω, can be made at almost no cost. This method is denoted by TCG-n(ω). Black-box adaptive methods to find good choices of ω are provided and discussed. Results show that TPCG-3(ω) converges to high accuracy (a few kcal/mol) for various types of systems, including proteins and highly charged systems, at the fixed cost of four matrix-vector products: three CG iterations plus the initial CG descent direction. Alternatively, T(P)CG-2(ω) provides robust results at a reduced cost (three matrix-vector products) and offers new perspectives for long polarizable MD as a production algorithm. The T(P)CG-1(ω) level provides less accurate solutions for inhomogeneous systems, but its applicability to well-conditioned problems such as water is remarkable, with only two matrix-vector product evaluations.
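
    A minimal sketch of the TCG-n(ω) idea: exactly n CG iterations on a polarization-like linear system T μ = E, optionally followed by one Jacobi-over-relaxation "peek" step built from the available residual. The matrix here is a generic symmetric positive-definite stand-in, not a real polarization matrix:

        import numpy as np

        def tcg(T, E, n_iter=3, omega=None):
            # Truncated Conjugate Gradient: exactly n_iter CG iterations on T mu = E,
            # optionally followed by one JOR "peek" step with relaxation parameter omega.
            mu = np.zeros_like(E)
            r = E - T @ mu
            p = r.copy()
            rr = r @ r
            for _ in range(n_iter):
                Tp = T @ p
                alpha = rr / (p @ Tp)
                mu += alpha * p
                r -= alpha * Tp
                rr_new = r @ r
                p = r + (rr_new / rr) * p
                rr = rr_new
            if omega is not None:
                # one extra fixed-point refinement using the available residual
                mu += omega * r / np.diag(T)
            return mu

        rng = np.random.default_rng(4)
        n = 50
        A = rng.random((n, n))
        T = A @ A.T + n * np.eye(n)   # symmetric positive-definite test matrix
        E = rng.random(n)
        mu = tcg(T, E, n_iter=3, omega=0.7)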

  17. Truncated Conjugate Gradient: An Optimal Strategy for the Analytical Evaluation of the Many-Body Polarization Energy and Forces in Molecular Simulations

    PubMed Central

    2016-01-01

    We introduce a new class of methods, denoted as Truncated Conjugate Gradient (TCG), to solve the many-body polarization energy and its associated forces in molecular simulations (i.e. molecular dynamics (MD) and Monte Carlo). The method consists of a fixed number of Conjugate Gradient (CG) iterations. TCG approaches provide a scalable solution to the polarization problem at a user-chosen cost and a corresponding optimal accuracy. The optimality of the CG method guarantees that the number of required matrix-vector products is reduced to a minimum compared to other iterative methods. This family of methods is non-empirical, fully adaptive, and provides analytical gradients, therefore avoiding any energy drift in MD as compared to popular iterative solvers. Besides speed, one great advantage of this class of approximate methods is that their accuracy is systematically improvable. Indeed, as the CG method is a Krylov subspace method, the associated error is monotonically reduced at each iteration. On top of that, two improvements can be made at virtually no cost: (i) preconditioners can be employed, which leads to the Truncated Preconditioned Conjugate Gradient (TPCG); (ii) since the residual of the final CG step is available, one additional Picard fixed-point iteration ("peek"), equivalent to one step of Jacobi Over-Relaxation (JOR) with relaxation parameter ω, can be made at almost no cost. This method is denoted by TCG-n(ω). Black-box adaptive methods to find good choices of ω are provided and discussed. Results show that TPCG-3(ω) converges to high accuracy (a few kcal/mol) for various types of systems, including proteins and highly charged systems, at the fixed cost of four matrix-vector products: three CG iterations plus the initial CG descent direction. Alternatively, T(P)CG-2(ω) provides robust results at a reduced cost (three matrix-vector products) and offers new perspectives for long polarizable MD as a production algorithm. The T(P)CG-1(ω) level provides less accurate solutions for inhomogeneous systems, but its applicability to well-conditioned problems such as water is remarkable, with only two matrix-vector product evaluations. PMID:28068773
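
    To make the truncated scheme concrete, the sketch below runs a fixed number of conjugate gradient iterations on a symmetric positive-definite system standing in for the polarization equations, followed by the optional "peek" step (one Jacobi over-relaxation update using the final residual). This is a minimal illustration, not the authors' implementation; the matrix, right-hand side and parameter values are invented.

```python
import numpy as np

def tcg(T, E, n_iter=3, omega=None):
    """Truncated CG sketch: a fixed number of CG iterations on T mu = E,
    optionally followed by one Jacobi over-relaxation ("peek") step."""
    mu = np.zeros_like(E)              # initial guess
    r = E - T @ mu                     # residual
    p = r.copy()                       # initial CG descent direction
    for _ in range(n_iter):
        Tp = T @ p                     # one matrix-vector product per iteration
        alpha = (r @ r) / (p @ Tp)
        mu = mu + alpha * p
        r_new = r - alpha * Tp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    if omega is not None:              # "peek": one JOR step using the available residual
        mu = mu + omega * r / np.diag(T)
    return mu

# Illustrative symmetric positive-definite system standing in for the polarization matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
T = A @ A.T + 50.0 * np.eye(50)
E = rng.standard_normal(50)
mu = tcg(T, E, n_iter=3, omega=0.7)
print(np.linalg.norm(T @ mu - E) / np.linalg.norm(E))   # relative residual after 3 iterations + peek
```

    Truncating at a fixed iteration count is what makes the cost predictable; because each CG iteration monotonically reduces the error, accuracy can be traded for speed simply by choosing the iteration count.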

  18. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
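
    The sketch below illustrates the asynchronous idea on a toy objective: each particle is updated and resubmitted as soon as its own evaluation returns, so slow evaluations never stall the rest of the swarm. It is a hedged illustration, not the authors' code; the objective function, swarm size, inertia and acceleration coefficients are all assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def sphere(x):
    return float(np.sum(x**2))    # stand-in objective; real analyses are far costlier

def async_pso(f, dim=5, n_particles=8, n_evals=400, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                # velocities
    pbest_x, pbest_f = x.copy(), np.full(n_particles, np.inf)
    gbest_x, gbest_f = x[0].copy(), np.inf
    evals = n_particles
    with ThreadPoolExecutor(max_workers=4) as pool:
        pending = {pool.submit(f, x[i].copy()): i for i in range(n_particles)}
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i = pending.pop(fut)
                fi = fut.result()
                # Update personal and global bests with whatever information is available now.
                if fi < pbest_f[i]:
                    pbest_f[i], pbest_x[i] = fi, x[i].copy()
                if fi < gbest_f:
                    gbest_f, gbest_x = fi, x[i].copy()
                if evals >= n_evals:
                    continue                        # budget exhausted: do not resubmit
                # Update this particle immediately (asynchronously) and resubmit it.
                r1, r2 = rng.random(dim), rng.random(dim)
                v[i] = w * v[i] + c1 * r1 * (pbest_x[i] - x[i]) + c2 * r2 * (gbest_x - x[i])
                x[i] = x[i] + v[i]
                pending[pool.submit(f, x[i].copy())] = i
                evals += 1
    return gbest_x, gbest_f

print(async_pso(sphere)[1])   # best objective value found
```

    In a synchronous implementation the wait call would instead block until all particles of the current iteration had finished, which is exactly what degrades speedup on heterogeneous clusters or with uneven analysis times.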

  19. Kinetic analysis of reactions of Si-based epoxy resins by near-infrared spectroscopy, 13C NMR and soft-hard modelling.

    PubMed

    Garrido, Mariano; Larrechi, Maria Soledad; Rius, F Xavier; Mercado, Luis Adolfo; Galià, Marina

    2007-02-05

    A soft- and hard-modelling strategy was applied to near-infrared spectroscopy data obtained from monitoring the reaction between glycidyloxydimethylphenyl silane, a silicon-based epoxy monomer, and aniline. On the basis of the pure soft-modelling approach and previous chemical knowledge, a kinetic model for the reaction was proposed. Then, multivariate curve resolution-alternating least squares optimization was carried out under a hard constraint that compels the concentration profiles to fulfil the proposed kinetic model at each iteration of the optimization process. In this way, the concentration profiles of each species and the corresponding kinetic rate constants of the reaction, unpublished until now, were obtained. The results obtained were contrasted with 13C NMR. The joint interval test of slope and intercept for detecting bias was not significant (alpha=5%).

  20. Towards programmable plant genetic circuits.

    PubMed

    Medford, June I; Prasad, Ashok

    2016-07-01

    Synthetic biology enables the construction of genetic circuits with predictable gene functions in plants. Detailed quantitative descriptions of the transfer function or input-output function for genetic parts (promoters, 5' and 3' untranslated regions, etc.) are collected. These data are then used in computational simulations to determine their robustness and desired properties, thereby enabling the best components to be selected for experimental testing in plants. In addition, the process forms an iterative workflow which allows vast improvement to validated elements with sub-optimal function. These processes enable computational functions such as digital logic in living plants and follow the pathway of technological advances which took us from vacuum tubes to cell phones. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  1. Process and methodology of developing Cassini G and C Telemetry Dictionary

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.

    1994-01-01

    While the Cassini spacecraft telemetry design had taken on the new approach of 'packetized telemetry', the AACS (Attitude and Articulation Control Subsystem) had further extended into the design of 'mini-packets' in its telemetry system. Such telemetry packet and mini-packet design produced the AACS Telemetry Dictionary; iterations of the latter in turn provided changes to the former. The ultimate goals were to achieve maximum telemetry packing density, to optimize the 'freshness' of more time-critical data, and to provide flexibility, i.e., multiple AACS data collection schemes, without needing to change the overall spacecraft telemetry mode. This paper describes such a systematic process and methodology, evidenced by various design products related to, or forming part of, the AACS Telemetry Dictionary.

  2. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
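
    A compact NumPy sketch of the iterative triclass idea follows (not the authors' code): Otsu's threshold and the two class means split the current region into confident foreground, confident background and a shrinking TBD band, which is re-thresholded until successive thresholds agree. The synthetic image and stopping tolerance are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Plain Otsu threshold on a 1-D array of gray levels."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    omega0 = np.cumsum(w)                 # class-0 probability for each candidate split
    mu_cum = np.cumsum(w * centers)       # cumulative mean
    mu_total = mu_cum[-1]
    omega1 = 1.0 - omega0
    valid = (omega0 > 0) & (omega1 > 0)
    sigma_b = np.zeros_like(omega0)
    sigma_b[valid] = (mu_total * omega0[valid] - mu_cum[valid])**2 / (omega0[valid] * omega1[valid])
    return centers[np.argmax(sigma_b)]    # maximize between-class variance

def iterative_triclass(image, tol=0.5, max_iter=50):
    """Iterative triclass sketch: fix clear foreground/background, re-threshold the TBD band."""
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)
    t_prev = None
    for _ in range(max_iter):
        t = otsu_threshold(image[tbd])
        low, high = tbd & (image <= t), tbd & (image > t)
        mu_low, mu_high = image[low].mean(), image[high].mean()
        if t_prev is not None and abs(t - t_prev) < tol:
            foreground |= high                              # final split of the remaining TBD region
            break
        foreground |= tbd & (image > mu_high)               # confidently foreground
        tbd = tbd & (image > mu_low) & (image <= mu_high)   # undecided band only
        if not tbd.any():
            break
        t_prev = t
    return foreground

rng = np.random.default_rng(1)
img = np.clip(rng.normal(100, 20, (64, 64)), 0, 255)
img[20:40, 20:40] += 60                                     # weak synthetic object
print(iterative_triclass(img).sum(), "foreground pixels")
```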

  3. Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, the routine returns an exit flag, terminates the optimization and returns the optimized values of the parameters. The cooperation with MATLAB via the LiveLink augments a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the field of technical cybernetics and bioengineering.

  4. Adaptive cockroach swarm algorithm

    NASA Astrophysics Data System (ADS)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend crossover predator-prey evolution method, which helps the algorithm prevent possible population collapse, maintain population diversity and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.

  5. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
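
    For concreteness, the following sketch evaluates the kind of voxel-based quadratic penalty described above, with separate under- and over-dose thresholds and weights; the thrust of TORA is to drive the plan by tuning the thresholds rather than the weights. This is an illustrative stand-in, not the TORA implementation, and the dose values, thresholds and weights are made up.

```python
import numpy as np

def voxel_quadratic_penalty(dose, t_under, t_over, w_under, w_over):
    """Voxel-wise quadratic penalty with separate under- and over-dose thresholds and weights."""
    under = np.maximum(t_under - dose, 0.0)   # only voxels below the under-dose threshold contribute
    over = np.maximum(dose - t_over, 0.0)     # only voxels above the over-dose threshold contribute
    return np.sum(w_under * under**2 + w_over * over**2)

# Illustrative target structure: prescribe 60 Gy, tolerate a small overshoot up to 62 Gy.
dose = np.array([55.0, 59.5, 60.2, 63.0, 58.0])
print(voxel_quadratic_penalty(dose, t_under=60.0, t_over=62.0, w_under=1.0, w_over=0.5))
```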

  6. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR and the recently proposed hybrid method. A discussion and comparison of the available stopping rules are included. A vibrating plate is considered as an example to validate our results.
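
    The following NumPy sketch demonstrates semi-convergence on an invented ill-conditioned test problem (it is not the holography reconstruction code): conjugate gradient applied to the normal equations (CGLS) first reduces, then amplifies, the error against the true solution, so stopping early plays the role of the regularization parameter.

```python
import numpy as np

def cgls(A, b, n_iter):
    """Conjugate gradient on the normal equations (CGLS); returns all iterates for inspection."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    history = []
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (s @ s) / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        s_new = A.T @ r
        beta = (s_new @ s_new) / (s @ s)
        p = s_new + beta * p
        s = s_new
        history.append(x.copy())
    return history

rng = np.random.default_rng(0)
n = 80
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T        # severely ill-conditioned operator
x_true = V[:, 0] + 0.5 * V[:, 1]                    # "source" dominated by the large singular vectors
b = A @ x_true + 1e-4 * rng.standard_normal(n)      # noisy "measurements"

errors = [np.linalg.norm(x - x_true) for x in cgls(A, b, 60)]
print("best iterate:", int(np.argmin(errors)) + 1, "error:", min(errors))  # error rises again afterwards
```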

  7. Can Structural Optimization Explain Slow Dynamics of Rocks?

    NASA Astrophysics Data System (ADS)

    Kim, H.; Vistisen, O.; Tencate, J. A.

    2009-12-01

    Slow dynamics is a recovery process that describes the return to an equilibrium state after some external energy input is applied and then removed. Experimental studies on many rocks have shown that a modest acoustic energy input results in slow dynamics. The recovery process of the stiffness has consistently been found to be linear in log(time) for a wide range of geomaterials and the time constants appear to be unique to the material [TenCate JA, Shankland TJ (1996), Geophys Res Lett 23, 3019-3022]. Measurements of this nonequilibrium effect in rocks (e.g. sandstones and limestones) have been linked directly to the cement holding the individual grains together [Darling TW, TenCate JA, Brown DW, Clausen B, Vogel SC (2004), Geophys Res Lett 31, L16604], also suggesting a potential link to porosity and permeability. Noting that slow dynamics consistently returns the overall stiffness of rocks to its maximum (original) state, it is hypothesized that the original state represents the global minimum strain energy state. Consequently, the slow dynamics process represents the global minimization or optimization process. Structural optimization, which has been developed for engineering design, minimises the total strain energy by rearranging the material distribution [Kim H, Querin OM, Steven GP, Xie YM (2002), Struct Multidiscip Optim 24, 441-448]. The optimization process effectively rearranges the way the material is cemented. One of the established global optimization methods is simulated annealing (SA). Derived from cooling of metal to a thermal equilibrium, SA finds an optimum solution by iteratively moving the system towards the minimum energy state with a probability of 'uphill' moves. It has been established that the global optimum can be guaranteed by applying a log(time) linear cooling schedule [Hajek B (1988), Math Ops Res, 15, 311-329]. This work presents an original study applying SA to the maximum-stiffness optimization problem. Preliminary results indicate that the maximum stiffness solutions are achieved when using the log(time) linear cooling schedule. The optimization history reveals that the overall stiffness of the structure increases linearly with log(time). The results closely resemble the slow dynamics stiffness recovery of geomaterials and support the hypothesis that the slow dynamics is an optimization process for strain energy. [Work supported by the Department of Energy through the LANL/LDRD Program].
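
    As a minimal illustration of the annealing schedule discussed above, the sketch below applies simulated annealing with the logarithmic cooling rule T_k = c/log(1 + k) to a toy one-dimensional cost function; it is not the stiffness-maximization code, and the cost function, move size and schedule constant are invented.

```python
import math
import random

def simulated_annealing(cost, x0, c=1.0, step=0.5, n_iter=20000, seed=0):
    """Simulated annealing with logarithmic cooling T_k = c / log(1 + k)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    for k in range(1, n_iter + 1):
        T = c / math.log(1.0 + k)               # log(time) cooling schedule
        x_new = x + rng.uniform(-step, step)    # random local move
        f_new = cost(x_new)
        # Accept downhill moves always; accept uphill moves with Boltzmann-like probability.
        if f_new <= fx or rng.random() < math.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Toy multi-modal cost with a unique global minimum near x = 2.
cost = lambda x: (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x)
print(simulated_annealing(cost, x0=-4.0))
```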

  8. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the problems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of split Bregman methods on sonar images.
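
    The splitting idea can be shown on a much simpler problem than sonar-image CS reconstruction. The sketch below applies split Bregman to one-dimensional total-variation denoising: the ℓ1 term is decoupled through the auxiliary variable d = Du, the quadratic subproblem becomes a linear solve, the ℓ1 subproblem becomes soft-thresholding, and a Bregman variable enforces the split. All parameters and the test signal are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def split_bregman_tv_1d(f, mu=5.0, lam=10.0, n_iter=100):
    """Split Bregman for min_u |Du|_1 + (mu/2)||u - f||^2 with D the forward difference."""
    n = f.size
    D = np.eye(n, k=1) - np.eye(n)
    D[-1, :] = 0.0                               # no difference past the last sample
    A = mu * np.eye(n) + lam * D.T @ D           # fixed system matrix of the u-subproblem
    u = f.copy()
    d = np.zeros(n)
    b = np.zeros(n)
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # l2 (quadratic) subproblem
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)                           # l1 subproblem via soft-thresholding
        b = b + Du - d                                          # Bregman update
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = split_bregman_tv_1d(noisy)
print(np.abs(denoised - clean).mean(), "<", np.abs(noisy - clean).mean())
```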

  9. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  10. Spacecraft Attitude Maneuver Planning Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2004-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.

  11. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
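
    The core linearize-and-solve step of a Gauss-Newton iteration is shown below on a small curve-fitting problem (not the matrix-fraction identification itself, and without the SK initialization or the sparse QR machinery): at each iteration the residual is linearized and a linear least-squares problem yields the parameter update. The model, data and starting point are invented.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=20):
    """Gauss-Newton: repeatedly solve the linearized least-squares problem J dtheta = -r."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        dtheta, *_ = np.linalg.lstsq(J, -r, rcond=None)   # least-squares step
        theta = theta + dtheta
    return theta

# Toy problem: fit y = a * exp(b * x) to noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

residual = lambda th: th[0] * np.exp(th[1] * x) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * x),
                                       th[0] * x * np.exp(th[1] * x)])
print(gauss_newton(residual, jacobian, theta0=[1.0, -1.0]))   # should approach [2.0, -1.3]
```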

  12. Empirical OPC rule inference for rapid RET application

    NASA Astrophysics Data System (ADS)

    Kulkarni, Anand P.

    2006-10-01

    A given technological node (45 nm, 65 nm) can be expected to process thousands of individual designs. Iterative methods applied at the node consume valuable days in determining proper placement of OPC features, and in manufacturing and testing mask correspondence to wafer patterns in a trial-and-error fashion for each design. Repeating this fabrication process for each individual design is a time-consuming and expensive process. We present a novel technique which sidesteps the requirement to iterate through the model-based OPC analysis and pattern verification cycle on subsequent designs at the same node. Our approach relies on the inference of rules from a correct pattern at the wafer surface as it relates to the OPC and pre-OPC pattern layout files. We begin with an offline phase where we obtain a "gold standard" design file that has been fab-tested at the node, with a prepared post-OPC layout file that corresponds to the intended on-wafer pattern. We then run an offline analysis to infer the rules to be used in this method. During the analysis, our method implicitly identifies contextual OPC strategies for optimal placement of RET features on any design at that node. Using these strategies, we can apply OPC to subsequent designs at the same node with accuracy comparable to the original design file but with significantly smaller expected runtimes. The technique promises to offer a rapid and accurate complement to existing RET application strategies.

  13. A system level model for preliminary design of a space propulsion solid rocket motor

    NASA Astrophysics Data System (ADS)

    Schumacher, Daniel M.

    Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. However, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters that are traded off unreasonable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, is achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in the pursuit of the best possible design.

  14. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips is sought subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.

  15. Calibration of ITER Instant Power Neutron Monitors: Recommended Scenario of Experiments at the Reactor

    NASA Astrophysics Data System (ADS)

    Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.

    2017-12-01

    Instant power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows the integration accuracy to be controlled in relation to the number of coordinate mesh points and thus minimizes the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for achieving maximum accuracy at minimum duration of the in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and to calibrate 238U chambers against the responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the staff monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the staff detectors are located. Owing to the low background, the detectors of the neutron chambers do not need calibration in the reactor, because that calibration amounts to determining the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.

  16. Iteration and Prototyping in Creating Technical Specifications.

    ERIC Educational Resources Information Center

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  17. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.

  18. Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata

    2014-09-01

    Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved with a maximum likelihood estimator that uses coincident, collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of the PSF parameters is derived by an iterative routine based on the four-dimensional Powell conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel-level counts to restore the original quality of the entire MTSAT-1R record.

  19. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization

    NASA Astrophysics Data System (ADS)

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-01

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  20. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization.

    PubMed

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-16

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  1. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.

  2. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565
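
    To make the hyperbolic least-squares (HLS) formulation concrete, the sketch below estimates a source position from range differences (TDoA times propagation speed) with a Newton/Gauss-Newton style iteration, in the spirit of the iterative methods compared above. It is not the survey's code; the sensor layout, source position and noise level are invented.

```python
import numpy as np

def hls_locate(sensors, rdiff, x0, n_iter=30):
    """Hyperbolic least squares: residuals are (|x - s_i| - |x - s_0|) minus measured range differences."""
    x = np.asarray(x0, dtype=float)
    s0 = sensors[0]
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)            # distances to every sensor
        r = (d[1:] - d[0]) - rdiff                          # hyperbolic residuals
        # Jacobian of each residual with respect to the source position x.
        J = (x - sensors[1:]) / d[1:, None] - (x - s0) / d[0]
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
d_true = np.linalg.norm(sensors - source, axis=1)
rng = np.random.default_rng(0)
rdiff = (d_true[1:] - d_true[0]) + 0.01 * rng.standard_normal(3)   # TDoA times propagation speed, with noise
print(hls_locate(sensors, rdiff, x0=[5.0, 5.0]))                   # close to [3, 7]
```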

  3. Investigation of key parameters for the development of reliable ITER baseline operation scenarios using CORSICA

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Casper, T. A.; Snipes, J. A.

    2018-05-01

    ITER will demonstrate the feasibility of burning plasma operation by operating DT plasmas in the ELMy H-mode regime with a high fusion power gain of Q ~ 10. The 15 MA ITER baseline operation scenario has been studied using CORSICA, focusing on the entry to burn, flat-top burning plasma operation and exit from burn. Burning plasma operation for about 400 s of the current flat-top was achieved in H-mode within the various engineering constraints imposed by the poloidal field coil and power supply systems. The target fusion gain (Q ~ 10) was achievable in the 15 MA ITER baseline operation with a moderate amount of total auxiliary heating power (~50 MW). It has been observed that the tungsten (W) concentration needs to be maintained at a low level (n_W/n_e up to the order of 1.0 × 10^-5) to avoid radiative collapse and uncontrolled early termination of the discharge. The dynamic evolution of the density can modify the H-mode access unless the applied auxiliary heating power is significantly higher than the H-mode threshold power. Several qualitative sensitivity studies have been performed to provide guidance for further optimizing the plasma operation and performance. Increasing the density profile peaking factor was quite effective in increasing the alpha particle self-heating power and the fusion power multiplication factor. Varying the combination of auxiliary heating power has shown that the fusion power multiplication factor can be reduced along with the increase in the total auxiliary heating power. As the 15 MA ITER baseline operation scenario requires the full capacity of the coil and power supply systems, the operation window for H-mode access and shape modification was narrow. The updated ITER baseline operation scenarios developed in this work will become a basis for further optimization studies necessary along with the improvement in understanding of burning plasma physics.

  4. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
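
    The destruction-construction loop of an iterated greedy heuristic can be sketched on a deliberately simplified setting: a single berth serving ships one after another, where total service time is the sum of completion times. This toy sketch (not the paper's algorithm for the full discrete DBAP) removes a few ships at random, greedily reinserts each at its best position, and accepts non-worsening sequences; the handling times and parameters are invented.

```python
import random

def total_service_time(seq, handling):
    """Total service time when ships in `seq` are served one after another at a single berth."""
    t, total = 0, 0
    for ship in seq:
        t += handling[ship]
        total += t
    return total

def iterated_greedy(handling, d=2, n_iter=200, seed=0):
    rng = random.Random(seed)
    seq = list(range(len(handling)))
    rng.shuffle(seq)                     # arbitrary initial sequence
    best = list(seq)
    for _ in range(n_iter):
        # Destruction: remove d random ships from the sequence.
        removed = rng.sample(seq, d)
        partial = [s for s in seq if s not in removed]
        # Construction: greedily reinsert each removed ship at its best position.
        for s in removed:
            candidates = [partial[:i] + [s] + partial[i:] for i in range(len(partial) + 1)]
            partial = min(candidates, key=lambda c: total_service_time(c, handling))
        # Acceptance: keep the new sequence only if it does not worsen the objective.
        if total_service_time(partial, handling) <= total_service_time(seq, handling):
            seq = partial
        if total_service_time(seq, handling) < total_service_time(best, handling):
            best = list(seq)
    return best, total_service_time(best, handling)

handling = [7, 3, 9, 2, 5, 8, 4]        # made-up handling times for 7 ships
print(iterated_greedy(handling))
```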

  5. Object tracking based on harmony search: comparative study

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei

    2012-10-01

    Visual tracking can be treated as an optimization problem. A new meta-heuristic optimization algorithm, Harmony Search (HS), was first applied to visual tracking by Fourie et al. As the authors point out, many topics still require ongoing research. Our work is a continuation of Fourie's study, with four prominent improved variants of HS, namely Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS), adopted into the tracking system. Their performance is tested and analyzed on multiple challenging video sequences. Experimental results show that IHS is best, with DHS ranking second among the four improved trackers when the number of iterations is small. However, the differences among all four reduce gradually as the number of iterations increases.

  6. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  7. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  8. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary Particle Swarm Optimization (BPSO) algorithm is used to find a solution in a reasonable time, and an iterative improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method is thus found to be the best and shows excellent results.

  9. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.

  10. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  11. Distributed weighted least-squares estimation with fast convergence for large-scale systems☆

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  12. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  13. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.
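    As a toy sketch of an iterative boundary-value solution (not the rocket trajectory problem itself), a shooting method adjusts a free initial condition until the terminal constraint is satisfied; the ODE and boundary values below are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Shooting on a toy BVP: y'' = -y with y(0) = 0 and y(1) = 1.
    # Iterate on the unknown initial slope until the terminal condition is met.

    def shoot(slope):
        sol = solve_ivp(lambda t, s: [s[1], -s[0]], (0.0, 1.0), [0.0, slope], rtol=1e-9)
        return sol.y[0, -1] - 1.0                    # mismatch at the right boundary

    slope = brentq(shoot, 0.1, 5.0)                  # root-find on the free initial condition
    print(slope, 1.0 / np.sin(1.0))                  # analytic answer: y(t) = sin(t)/sin(1)
    ```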

  14. A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand

    2017-11-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
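    A hedged sketch of the per-cell equilibrium idea follows; it enforces mechanical equilibrium only (the temperature iteration for thermal equilibrium is omitted), substitutes a simple Noble-Abel covolume model for the real-gas product equation of state, and uses purely illustrative constants.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Split the cell volume between air (ideal gas) and detonation products (Noble-Abel
    # covolume EOS as a stand-in for the real-gas model) so that both see the same pressure.

    R = 8.314          # J/(mol K)
    T = 1500.0         # assumed common temperature, K
    n_air, n_prod = 2.0, 1.0          # moles of each species in the cell (illustrative)
    b_prod = 3.0e-5    # covolume of the products, m^3/mol (illustrative)
    V_cell = 1.0e-3    # cell volume, m^3

    def p_air(V):                     # ideal gas: p V = n R T
        return n_air * R * T / V

    def p_prod(V):                    # Noble-Abel: p (V - n b) = n R T
        return n_prod * R * T / (V - n_prod * b_prod)

    def pressure_mismatch(V_air):
        return p_air(V_air) - p_prod(V_cell - V_air)

    # Iterate (bracketed root-finding) on the volume split until the pressures match.
    V_air = brentq(pressure_mismatch, 1e-6, V_cell - n_prod * b_prod - 1e-6)
    print(V_air, p_air(V_air))
    ```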

  15. Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.

    2017-06-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
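    To make the surrogate idea concrete, a minimal single-fidelity Kriging fit is sketched below using scikit-learn (assumed available); the placeholder function stands in for the iterative equilibrium solve, whereas the paper's surrogate is multi-fidelity and multi-dimensional.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Sample an "expensive" routine at a few points, fit a Kriging (Gaussian process)
    # interpolant, then evaluate the cheap surrogate instead of the routine.

    def expensive_solver(x):
        return np.sin(3 * x) + 0.5 * x               # placeholder for the iterative EOS solve

    X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)        # sampling points
    y_train = expensive_solver(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X_train, y_train)

    X_query = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
    y_pred, y_std = gp.predict(X_query, return_std=True)     # fast surrogate evaluation
    print(np.max(np.abs(y_pred - expensive_solver(X_query).ravel())), y_std.max())
    ```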

  16. Generalizations of the Alternating Direction Method of Multipliers for Large-Scale and Distributed Optimization

    DTIC Science & Technology

    2014-05-01

    exact one is solved later (assigned as step 5 of Algorithm 2) because at each iteration, the ADMM updates the variables in the Gauss-Seidel ... Jacobi ADMM (see Algorithm 5 below). Unlike the Gauss-Seidel ADMM, the Jacobi ADMM updates all the blocks in parallel at every iteration: x_i^{k+1} ... that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobi setting, from two blocks to multiple blocks, will preserve
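    A minimal sketch of the sequential-versus-parallel update distinction the report draws, shown on a plain linear system rather than on the ADMM subproblems themselves (illustrative only): a Gauss-Seidel sweep uses each freshly updated block immediately, while a Jacobi sweep updates every block from the previous iterate and is therefore parallelizable.

    ```python
    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    def gauss_seidel_sweep(x):
        x = x.copy()
        for i in range(len(b)):                      # uses x[j] already updated for j < i
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    def jacobi_sweep(x):
        # every component is computed from the old iterate, so all updates can run in parallel
        D = np.diag(A)
        return (b - (A - np.diag(D)) @ x) / D

    x_gs = x_j = np.zeros(3)
    for _ in range(50):
        x_gs, x_j = gauss_seidel_sweep(x_gs), jacobi_sweep(x_j)

    print(x_gs, x_j, np.linalg.solve(A, b))          # both converge to the same solution here
    ```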

  17. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
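    A minimal sketch of the SampEn statistic itself (standard definition with Chebyshev distance and self-matches excluded) is given below; the DASim schedules and the harmony-search tuning loop are not reproduced, and the test signals are illustrative.

    ```python
    import numpy as np

    # SampEn = -ln(A/B), where B counts template matches of length m within tolerance r
    # and A counts matches of length m+1. Lower SampEn indicates a more regular series.

    def sampen(series, m=2, r=0.2):
        x = np.asarray(series, dtype=float)
        r = r * x.std()

        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev distance
                count += np.sum(dist <= r) - 1                            # exclude the self-match
            return count

        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(1)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))
    irregular = rng.standard_normal(500)
    print(sampen(regular), sampen(irregular))        # the sine yields the lower value
    ```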

  18. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. This framework can thus capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
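    A hedged sketch of the consensus splitting follows, with both terms chosen quadratic so each subproblem update has a closed form; in the paper the prior update is carried out by a Kalman smoother and the likelihood may be non-Gaussian, and all dimensions and weights below are illustrative.

    ```python
    import numpy as np

    # Consensus ADMM for a toy denoising MAP problem: one copy of the signal is updated
    # using only the (Gaussian) likelihood, another using only a smoothness prior, and
    # the consensus step "averages" them; dual variables enforce agreement.

    rng = np.random.default_rng(2)
    n = 200
    truth = np.cumsum(0.1 * rng.standard_normal(n))          # slowly varying latent series
    y = truth + 0.5 * rng.standard_normal(n)                  # noisy observations

    sigma2, lam, rho = 0.25, 5.0, 1.0
    D = np.diff(np.eye(n), axis=0)                            # first-difference operator
    prior_mat = 2 * lam * D.T @ D + rho * np.eye(n)

    x_lik = x_pri = z = np.zeros(n)
    u_lik = u_pri = np.zeros(n)
    for _ in range(100):
        x_lik = (y / sigma2 + rho * (z - u_lik)) / (1 / sigma2 + rho)   # likelihood-only update
        x_pri = np.linalg.solve(prior_mat, rho * (z - u_pri))           # prior-only (smoothing) update
        z = 0.5 * (x_lik + u_lik + x_pri + u_pri)                       # consensus ("averaging") step
        u_lik = u_lik + x_lik - z
        u_pri = u_pri + x_pri - z

    print(np.linalg.norm(z - truth) / np.linalg.norm(y - truth))        # < 1 means denoising helped
    ```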

  19. Simulation-Driven Design Approach for Design and Optimization of Blankholder

    NASA Astrophysics Data System (ADS)

    Sravan, Tatipala; Suddapalli, Nikshep R.; Johan, Pilthammar; Mats, Sigvant; Christian, Johansson

    2017-09-01

    Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today based mostly on casting feasibility, although it can also be based on criteria such as fatigue, stiffness, safety, and economy. The current work presents an approach built on Simulation-Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for Volvo V70/S80 car models, is studied. This die had developed cracks during its use. To understand the stress distribution in the stamping die, a structural analysis of the die is conducted and critical regions with high stresses are identified. The results from the structural FE models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, the identified critical regions of the die are set as design variables. Other optimization variables are set to maintain manufacturability of the resulting stamping die. Thereafter a CAD model is built based on the geometrical results of the topology and free-shape optimizations. The CAD model is then subjected to structural analysis to visualize the new stress distribution. This process is iterated until a satisfactory result is obtained. The final results show a 70% reduction in stress levels with a more homogeneous distribution. Even though the mass of the die increases by 17%, overall a stiffer die with a longer lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.

  20. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with correspondingly longer computation times. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can further increase running time and thus hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters of the parallel SWAT (P-SWAT), namely the number of processes and the corresponding percentage of work distributed to the master process, on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme between the master and slave processes. Although computation time decreases as the number of processes increases (from two to five), the improvement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help substantially reduce the computation time for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
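    A minimal master/worker sketch using mpi4py (assumed installed) is given below; simulate_subbasin and all sizes are placeholders, not part of SWAT or P-SWAT, and the master's share of the work is the tunable fraction discussed above.

    ```python
    from mpi4py import MPI
    import numpy as np

    # Master keeps a configurable share of the subbasins and scatters the rest to the
    # other processes; results are gathered back on the master. Run with at least two
    # processes, e.g. `mpiexec -n 4 python demo.py`.

    def simulate_subbasin(subbasin_id):
        return subbasin_id ** 0.5                     # stand-in for the real per-subbasin work

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        subbasins = list(range(100))
        master_share = 0.2                            # fraction of work kept by the master
        n_master = int(master_share * len(subbasins))
        remaining = subbasins[n_master:]
        # One chunk per rank: the master's own share first, the rest split evenly.
        chunks = [subbasins[:n_master]] + [list(c) for c in np.array_split(remaining, size - 1)]
    else:
        chunks = None

    my_chunk = comm.scatter(chunks, root=0)                   # distribute work
    my_results = [simulate_subbasin(s) for s in my_chunk]     # each process runs its share
    all_results = comm.gather(my_results, root=0)             # collect on the master

    if rank == 0:
        total = sum(len(r) for r in all_results)
        print(f"processed {total} subbasins across {size} processes")
    ```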
