Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step the training samples are introduced, and the optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider the experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.
Optimization of global model composed of radial basis functions using the term-ranking approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize a global model composed of radial basis functions in order to improve the predictability of the model. The effectiveness of the proposed method is examined using numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Component-based integration of chemistry and optimization software.
Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L
2004-11-15
Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
NASA Astrophysics Data System (ADS)
Zielinski, Jonas; Mindt, Hans-Wilfried; Düchting, Jan; Schleifenbaum, Johannes Henrich; Megahed, Mustafa
2017-12-01
Powder bed fusion additive manufacturing of titanium alloys is an interesting manufacturing route for many applications requiring high material strength combined with geometric complexity. Managing powder bed fusion challenges, including porosity, surface finish, distortions and residual stresses of as-built material, is the key to bringing the advantages of this process into mainstream production. This paper discusses the application of experimental and numerical analysis towards optimizing the manufacturing process of a demonstration component. Powder characterization, including assessment of reusability, assessment of material consolidation and process window optimization, is pursued prior to applying the identified optima to study the distortion and residual stresses of the demonstrator. Comparisons of numerical predictions with measurements show good correlations along the complete numerical chain.
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem into density estimation. The structure of the induced density is then searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation. Most importantly, (iii) CCA may give rise to large gains in optimization time. Numerical simulations illustrate the working of the algorithm.
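A minimal sketch of the resampling-plus-ICA idea described above, not the authors' CCA algorithm: candidate points are drawn uniformly, resampled according to Boltzmann weights computed from a hypothetical energy function, and the induced density is then analyzed with scikit-learn's FastICA. The energy landscape, temperature and sample sizes are assumptions made for illustration.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

def energy(x):
    # hypothetical energy landscape that becomes separable after a rotation
    a = x[:, 0] + x[:, 1]
    b = x[:, 0] - x[:, 1]
    return a ** 2 + 0.1 * b ** 4

# 1. sample the search space uniformly
X = rng.uniform(-3.0, 3.0, size=(5000, 2))

# 2. resample according to the Boltzmann distribution exp(-E / T)
T = 0.5
w = np.exp(-energy(X) / T)
w /= w.sum()
idx = rng.choice(len(X), size=2000, replace=True, p=w)
Xb = X[idx]

# 3. search the structure of the induced density with ICA
ica = FastICA(n_components=2, random_state=0)
ica.fit(Xb)
print("estimated mixing matrix:\n", ica.mixing_)
# each column of the mixing matrix suggests a direction along which the
# original problem could be optimized separately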
Artificial immune system for effective properties optimization of magnetoelectric composites
NASA Astrophysics Data System (ADS)
Poteralski, Arkadiusz; Dziatkiewicz, Grzegorz
2018-01-01
The optimization problem of the effective properties for magnetoelectric composites is considered. The effective properties are determined by the semi-analytical Mori-Tanaka approach. The generalized Eshelby tensor components are calculated numerically by using the Gauss quadrature method for the integral representation of the inclusion problem. The linear magnetoelectric constitutive equation is used. The effect of orientation of the electromagnetic materials components is taken into account. The optimization problem of the design is formulated and the artificial immune system is applied to solve it.
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods used to improve the design of helicopters and their components. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are examined: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Reliability-based optimization of maintenance scheduling of mechanical components under fatigue
Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.
2012-01-01
This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. Cracks in damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979
Control strategy optimization of HVAC plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio
In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and of the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time settings.
Optimal Repair And Replacement Policy For A System With Multiple Components
2016-06-17
Numerical Demonstration: To implement the linear program, we use the Python Programming Language (PSF 2016) with the Pyomo optimization modeling language (Hart, Watson, and Woodruff 2011; Hart et al. 2012). Hart, W.E., C. Laird, J. Watson, D.L. Woodruff. 2012. Pyomo - Optimization Modeling in Python, vol. 67. Springer Science & Business Media. Hart, W.E., J. Watson, D.L. Woodruff. 2011. Pyomo: modeling and solving mathematical programs in Python. Mathematical Programming Computation 3(3).
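To make the Pyomo reference above concrete, the sketch below sets up and solves a small linear program with Pyomo's ConcreteModel interface. The decision variables, cost coefficients and constraint values are purely illustrative assumptions, not the repair-and-replacement model of the report, and the GLPK solver is just one possible backend.

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, minimize, value)

m = ConcreteModel()
# decision variables, e.g. amounts of repair vs. replacement effort (illustrative)
m.repair = Var(domain=NonNegativeReals)
m.replace = Var(domain=NonNegativeReals)

# minimize a hypothetical total cost
m.cost = Objective(expr=4.0 * m.repair + 9.0 * m.replace, sense=minimize)

# demand and capacity constraints (illustrative numbers)
m.demand = Constraint(expr=m.repair + m.replace >= 10.0)
m.capacity = Constraint(expr=m.repair <= 6.0)

SolverFactory('glpk').solve(m)           # requires a GLPK installation
print(value(m.repair), value(m.replace), value(m.cost))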
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William E.; Siirola, John Daniel
We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
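The following sketch illustrates, under the assumption that the pyomo.mpec interface works as documented (Complementarity components, the complements expression, and the 'mpec.simple_nonlinear' transformation), how a complementarity condition might be declared and transformed before being handed to a nonlinear solver; the toy objective and variables are not taken from the report.

from pyomo.environ import ConcreteModel, Var, Objective, TransformationFactory, SolverFactory
from pyomo.mpec import Complementarity, complements

m = ConcreteModel()
m.x = Var()
m.y = Var()

m.obj = Objective(expr=(m.x - 1.0) ** 2 + (m.y - 1.0) ** 2)

# complementarity condition: x >= 0 is complementary to y >= 0
m.compl = Complementarity(expr=complements(m.x >= 0, m.y >= 0))

# re-express the complementarity condition as smooth nonlinear constraints
TransformationFactory('mpec.simple_nonlinear').apply_to(m)

SolverFactory('ipopt').solve(m)          # requires an Ipopt installation
print(m.x.value, m.y.value)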
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that mass changes continuously through growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new rapid prototyping technologies. Fraunhofer IFAM has successfully applied 3D printing and selective laser sintering methods in order to produce very stiff, lightweight components with graded porosities calculated by MPTO.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.
Optimal control for a tuberculosis model with undetected cases in Cameroon
NASA Astrophysics Data System (ADS)
Moualeu, D. P.; Weiser, M.; Ehrig, R.; Deuflhard, P.
2015-03-01
This paper considers the optimal control of tuberculosis through education, diagnosis campaigns and chemoprophylaxis of latently infected individuals. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected and lost-sight infectious individuals is formulated. The model combines a frequency-dependent and a density-dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). The results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. They show that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
(n, N) type maintenance policy for multi-component systems with failure interactions
NASA Astrophysics Data System (ADS)
Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul
2015-04-01
This paper studies maintenance policies for multi-component systems in which failure interactions and opportunistic maintenance (OM) are involved. This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on the maintenance policy.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
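A minimal NumPy sketch of PCA/SVD-based stacking, assuming a gather arranged as traces by time samples; the leading right singular vector of the gather (computed here with a plain SVD rather than the fast algorithm proposed in the paper) serves as the stacked trace and is compared with the conventional average stack on synthetic data.

import numpy as np

rng = np.random.default_rng(1)
n_traces, n_samples = 40, 500

# synthetic gather: a common reflection signal plus strong random noise on every trace
t = np.arange(n_samples)
signal = np.exp(-0.5 * ((t - 250) / 15.0) ** 2)
gather = signal[None, :] + 2.0 * rng.standard_normal((n_traces, n_samples))

# conventional stacking: plain average over traces
mean_stack = gather.mean(axis=0)

# PCA/SVD stacking: the leading right singular vector approximates the common waveform
U, s, Vt = np.linalg.svd(gather, full_matrices=False)
pca_stack = Vt[0] * np.sign(Vt[0] @ mean_stack)   # fix the arbitrary sign of the component

print("correlation with true signal, mean stack:", np.corrcoef(mean_stack, signal)[0, 1])
print("correlation with true signal, PCA stack: ", np.corrcoef(pca_stack, signal)[0, 1])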
SRS modeling in high power CW fiber lasers for component optimization
NASA Astrophysics Data System (ADS)
Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.
2017-02-01
A CW kilowatt fiber laser numerical model has been developed taking into account intracavity stimulated Raman scattering (SRS). It uses the split-step Fourier method, which is applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool for designing optimized and innovative fiber components to push back the onset of SRS for a given fiber core diameter.
NASA Technical Reports Server (NTRS)
Bugbee, B.; Monje, O.
1992-01-01
Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.
Investigation of Truncated Waveguides
NASA Technical Reports Server (NTRS)
Lourie, Nathan P.; Chuss, David T.; Henry, Ross M.; Wollack, Edward J.
2013-01-01
The design, fabrication, and performance of truncated circular and square waveguide cross-sections are presented. An emphasis is placed upon numerical and experimental validation of simple analytical formulae that describe the propagation properties of these structures. A test component, a 90-degree phase shifter, was fabricated and tested at 30 GHz. The concepts explored can be directly applied in the design, synthesis and optimization of components in the microwave to sub-millimeter wavebands.
Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.
Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon
2009-01-01
Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space, 143% in 3-dimensional space.
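A toy sketch of the position estimate underlying such a system: reference tags on a regular 2-dimensional grid are 'detected' whenever they fall inside an assumed circular detection range, set here to 134% of the tag spacing as reported above, and the reader position is estimated as the centroid of the detected tags. The grid size and reader position are illustrative assumptions.

import numpy as np

spacing = 1.0                      # tag spacing (m), illustrative
radius = 1.34 * spacing            # detection range, 134% of spacing in 2-D (see abstract)

# reference tags on a regular 2-D grid (no signal-strength information)
xs, ys = np.meshgrid(np.arange(0, 10, spacing), np.arange(0, 10, spacing))
tags = np.column_stack([xs.ravel(), ys.ravel()])

def estimate_position(reader_xy):
    d = np.linalg.norm(tags - reader_xy, axis=1)
    detected = tags[d <= radius]               # tags inside the detection range
    return detected.mean(axis=0)               # k-NN style centroid estimate

true_pos = np.array([4.3, 5.7])
est = estimate_position(true_pos)
print("true:", true_pos, "estimated:", est, "error:", np.linalg.norm(est - true_pos))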
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, whereas subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme we optimized for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis
NASA Astrophysics Data System (ADS)
Dion, J.-L.; Tawfiq, I.; Chevallier, G.
2012-01-01
This work is a contribution to the field of Operational Modal Analysis (OMA), whose aim is to identify the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components that are amplitude- and frequency-modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
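The sketch below computes the standard Spectral Kurtosis statistic from a short-time Fourier transform (not the Optimized Spectral Kurtosis variant proposed by the authors): for a stationary random response the statistic stays near 0, while a steady harmonic component drives it toward -1, which is what makes it useful for separating harmonics from structural modes. The signal parameters are illustrative.

import numpy as np
from scipy.signal import stft

fs = 2048.0
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(2)

# harmonic component at 50 Hz buried in a broadband random (mode-like) response
x = np.sin(2 * np.pi * 50.0 * t) + rng.standard_normal(t.size)

f, frames, X = stft(x, fs=fs, nperseg=256)
P2 = np.mean(np.abs(X) ** 2, axis=1)           # E[|X(f)|^2] over time frames
P4 = np.mean(np.abs(X) ** 4, axis=1)           # E[|X(f)|^4] over time frames
SK = P4 / P2 ** 2 - 2.0                        # spectral kurtosis estimate

print("SK near 50 Hz   :", SK[np.argmin(np.abs(f - 50.0))])   # markedly negative (harmonic-dominated bin)
print("median SK (noise):", np.median(SK))                     # close to 0 (random response)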
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data and (ii) efficient local search for the resulting reduced-dimensionality non-linear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
Numerical Simulation of Sintering Process in Ceramic Powder Injection Moulded Components
NASA Astrophysics Data System (ADS)
Song, J.; Barriere, T.; Liu, B.; Gelin, J. C.
2007-05-01
A phenomenological model based on a viscoplastic constitutive law is presented to describe the sintering process of ceramic components obtained by powder injection moulding. The parameters entering the model are identified through sintering experiments in a dilatometer with the proposed optimization method. Finite element simulations are carried out to predict the density variations and dimensional changes of the components during sintering. A simulation example on the sintering of an alumina hip implant has been conducted. The simulation results have been compared with the experimental ones, and good agreement is obtained.
Processing-Related Issues for the Design and Lifing of SiC/SiC Hot-Section Components
NASA Technical Reports Server (NTRS)
DiCarlo, J.; Bhatt, R.; Morscher, G.; Yun, H. M.
2006-01-01
For successful SiC/SiC engine components, numerous process steps related to the fiber, fiber architecture, interphase coating, and matrix need to be optimized. Under recent NASA-sponsored programs, it was determined that many of these steps in their initial approach were inadequate, resulting in less than optimum thermostructural and life properties for the as-fabricated components. This presentation will briefly review many of these process issues, the key composite properties they degrade, their underlying mechanisms, and current process remedies developed by NASA and others.
Optimization of design parameters of low-energy buildings
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2017-07-01
Evaluation of temperature development and the related consumption of energy required for heating, air-conditioning, etc. in low-energy buildings requires a proper physical analysis, covering heat conduction, convection and radiation, including beam and diffusive components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition, together with the finite element technique, offer the possibility of inexpensive and robust numerical and computational analysis of the corresponding direct problems, as well as of optimization problems with several design variables, using the Nelder-Mead simplex method. A practical example demonstrates the correlation between such numerical simulations and the time series of measurements of energy consumption on a small family house in Ostrov u Macochy (35 km north of Brno).
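A minimal sketch of a Nelder-Mead search over two design variables using SciPy's implementation of the simplex method; the surrogate 'annual energy' objective and the penalty treatment of non-negativity are stand-in assumptions for the building simulation described above.

import numpy as np
from scipy.optimize import minimize

def annual_energy(x):
    # hypothetical smooth surrogate for energy consumption as a function of
    # two design variables (e.g. insulation thickness and glazing area)
    insulation, glazing = x
    heating = 80.0 / (1.0 + insulation) + 0.5 * glazing
    solar_gain = 4.0 * np.sqrt(max(glazing, 0.0))
    penalty = 1e3 * (max(0.0, -insulation) + max(0.0, -glazing))  # keep variables non-negative
    return heating - solar_gain + 2.0 * insulation + penalty

res = minimize(annual_energy, x0=[0.1, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-4, 'fatol': 1e-4})
print("optimal design variables:", res.x, "objective:", res.fun)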
Szajek, Krzysztof; Wierszycki, Marcin
2016-01-01
Dental implant design is a complex process which must respect many limitations, both biological and mechanical in nature. In earlier studies, a complete procedure for the improvement of a two-component dental implant was proposed. However, the optimization tasks carried out required an assumption about the representative load case, which raised doubts about optimality for other load cases. This paper deals with verification of the optimal design in the context of fatigue life, and its main goal is to answer the question of whether the assumed load scenario (solely horizontal occlusal load) leads to a design which is also "safe" for oblique occlusal loads regardless of the angle from the implant axis. The verification is carried out with a series of finite element analyses for a wide spectrum of physiologically justified loads. The design of experiments methodology with a full factorial technique is utilized. All computations are done in the Abaqus suite. The maximal Mises stress and normalized effective stress amplitude for various load cases are discussed and compared with the assumed "safe" limit (equivalent to a fatigue life of 5e6 cycles). The obtained results prove that the coronal-apical load component should be taken into consideration when the fatigue life of the two-component dental implant is optimized. However, its influence in the analyzed case is small and does not change the fact that the fatigue life improvement is observed for all components within the whole range of analyzed loads.
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.
2016-02-01
The specification of tolerances has a significant impact on the quality of a product and the final production cost. The company should pay careful attention to component or product tolerances so that it can produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before getting into the selection process, however, the company must first analyse whether each component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model for process and supplier selection that minimizes the manufacturing costs and the fuzzy quality loss. This model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used in this paper to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example problem using a simple assembly product consisting of three components. A metaheuristic approach was implemented in the OptQuest software from Oracle Crystal Ball in order to obtain the optimal solution of the numerical example.
Optimization design of energy deposition on single expansion ramp nozzle
NASA Astrophysics Data System (ADS)
Ju, Shengjun; Yan, Chao; Wang, Xiaoyong; Qin, Yupei; Ye, Zhifei
2017-11-01
Optimization design has been widely used in the aerodynamic design process of scramjets. The single expansion ramp nozzle is an important component of scramjets that produces most of the thrust. A new concept of improving the aerodynamic performance of the scramjet nozzle with energy deposition is presented. The essence of the method is to create a heated region in the internal flow field of the scramjet nozzle. In the current study, the two-dimensional coupled implicit compressible Reynolds-averaged Navier-Stokes equations and Menter's shear stress transport turbulence model have been applied to numerically simulate the flow fields of the single expansion ramp nozzle with and without energy deposition. The numerical results show that energy deposition can be an effective method to improve the force characteristics of the scramjet nozzle: the thrust coefficient CT increases by 6.94% and the lift coefficient CN decreases by 26.89%. Further, the non-dominated sorting genetic algorithm coupled with a Radial Basis Function neural network surrogate model has been employed to determine the optimum location and density of the energy deposition. The thrust coefficient CT and lift coefficient CN are selected as objective functions, and the sampling points are obtained numerically by using a Latin hypercube design method. The optimized thrust coefficient CT further increases by 1.94%, while the optimized lift coefficient CN further decreases by 15.02%. At the same time, the optimized performances are in good and reasonable agreement with the numerical predictions. The findings suggest that scramjet nozzle design and performance can benefit from the application of energy deposition.
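As a small illustration of the sampling step mentioned above, the sketch below draws a Latin hypercube design over two hypothetical design variables (energy-deposition location and density) with scipy.stats.qmc; the bounds are placeholders, and in the actual workflow each design point would be evaluated with the flow solver to train the RBF surrogate.

from scipy.stats import qmc

# two design variables: streamwise location of the heated region and deposited energy density
l_bounds = [0.1, 0.5]     # placeholder lower bounds (location in m, density in MW/m^3)
u_bounds = [0.9, 5.0]     # placeholder upper bounds

sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=20)                      # 20 points in [0, 1)^2
designs = qmc.scale(unit_samples, l_bounds, u_bounds)    # rescale to physical bounds

for loc, dens in designs:
    print(f"location = {loc:.3f}, energy density = {dens:.3f}")
    # each design point would be evaluated with the RANS/SST solver to build the RBF surrogate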
Numerical simulation of a battlefield Nd:YAG laser
NASA Astrophysics Data System (ADS)
Henriksson, Markus; Sjoqvist, Lars; Uhrwing, Thomas
2005-11-01
A numeric model has been developed to identify the critical components and parameters in improving the output beam quality of a flashlamp pumped Q-switched Nd:YAG laser with a folded Porro-prism resonator and polarization output coupling. The heating of the laser material and accompanying thermo-optical effects are calculated using the finite element partial differential equations package FEMLAB allowing arbitrary geometries and time distributions. The laser gain and the cavity are modeled with the physical optics simulation code GLAD including effects such as gain profile, thermal lensing and stress-induced birefringence, the Pockels cell rise-time and component aberrations. The model is intended to optimize the pumping process of an OPO providing radiation to be used for ranging, imaging or optical countermeasures.
Integrated design optimization research and development in an industrial environment
NASA Astrophysics Data System (ADS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-04-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on the integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system, DESIGN-OPT, has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitism non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
NASA Astrophysics Data System (ADS)
Kredler, L.; Häußler, W.; Martin, N.; Böni, P.
Flux is still a major limiting factor in neutron research. For instruments supplied with cold neutrons by neutron guides, both at present steady-state sources and at new spallation neutron sources, it is therefore important to optimize the instrumental setup and the neutron guidance. Optimization of the neutron guide geometry and of the instrument itself can be performed by numerical ray-tracing simulations using existing open-access codes. In this paper, we discuss how such Monte Carlo simulations have been employed in order to plan improvements of the Neutron Resonant Spin Echo spectrometer RESEDA (FRM II, Germany) as well as of the neutron guides before and within the instrument. The essential components have been represented with the help of the McStas ray-tracing package. The expected intensity has been tested by means of several virtual detectors implemented in the simulation code. Comparison between simulations and preliminary measurement results shows good agreement and demonstrates the reliability of the numerical approach. These results will be taken into account in the planning of new components installed in the guide system.
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermal gravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity in the international space station. Our approach corroborates the existing data published in the literature. The authors show that it is possible to quantify and to optimize the species separation for ternary mixtures. The authors checked, for ternary mixtures, the validity of the "forgotten effect hypothesis" established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used in order to describe the separation in terms of Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
NASA Astrophysics Data System (ADS)
Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong
2015-09-01
The hypervapotron (HV), an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as for the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Beforehand, a relevant analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width among the tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. The optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004)
Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N
2017-10-17
The limit of detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with a spectrally nonuniform background requires complex optimization of an emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) the emission spectrum of the analyte, (iii) the emission spectrum of the optical background, and (iv) the transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing the SNR. Maximizing the SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing the signal-to-background ratio (SBR) is an approximation that can lead to much poorer LOD, specifically in the detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
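A schematic version of the described optimization, assuming Gaussian emission spectra, an ideal long-pass filter parameterized only by its cut-on wavelength, and a noise model with dark, shot and flicker terms; all spectra, noise coefficients and the filter parameterization are assumptions for illustration, whereas the paper optimizes the full transmittance spectrum.

import numpy as np
from scipy.optimize import minimize_scalar

wl = np.linspace(480.0, 700.0, 1101)            # wavelength grid, nm
dw = wl[1] - wl[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

analyte = 1.0 * gauss(wl, 520.0, 12.0)          # assumed analyte emission spectrum (a.u.)
background = 5.0 * gauss(wl, 505.0, 30.0)       # assumed optical background spectrum (a.u.)

dark = 2.0        # dark-noise standard deviation (a.u., assumed)
flicker = 0.02    # flicker-noise coefficient, fraction of detected background (assumed)

def snr(cut_on):
    T = (wl >= cut_on).astype(float)            # ideal long-pass filter transmittance
    S = np.sum(analyte * T) * dw                # detected analyte signal
    B = np.sum(background * T) * dw             # detected background
    shot = np.sqrt(S + B)                       # shot noise of the total detected light
    noise = np.sqrt(dark ** 2 + shot ** 2 + (flicker * B) ** 2)
    return S / noise

res = minimize_scalar(lambda c: -snr(c), bounds=(480.0, 600.0), method='bounded')
print(f"optimal cut-on wavelength: {res.x:.1f} nm, SNR = {snr(res.x):.2f}")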
Numerical Simulation of Cast Distortion in Gas Turbine Engine Components
NASA Astrophysics Data System (ADS)
Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.
2015-06-01
In this paper the manufacturing of multiple airfoil vanes through investment casting is considered. A mathematical model of the full contact problem is built to determine the stress-strain state in a casting during solidification. The studies are carried out in a viscoelastoplastic formulation. Numerical simulation of the process is implemented with the ProCAST software package. The results of the simulation are compared with the real production process. By means of computer analysis, the technological process parameters are optimized in order to eliminate the defect of cast-wall thickness variation.
Optimization of an electrokinetic mixer for microfluidic applications.
Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P J
2012-06-01
This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly.
Optimal design of wide-view-angle waveplate used for polarimetric diagnosis of lithography system
NASA Astrophysics Data System (ADS)
Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Chen, Xiuguo; Liu, Shiyuan
2016-03-01
The diagnosis and control of polarization aberrations is one of the main concerns in a hyper-numerical-aperture (NA) lithography system. Waveplates are basic and indispensable optical components in the polarimetric diagnosis tools for the immersion lithography system. The retardance of a birefringent waveplate is highly sensitive to the incident angle of the light, which makes the conventional waveplate unsuitable for polarimetric diagnosis in an immersion lithography system with a hyper NA. In this paper, we propose a method for the optimal design of a wide-view-angle waveplate by combining two positive waveplates made from magnesium fluoride (MgF2) and two negative waveplates made from sapphire using the simulated annealing algorithm. Theoretical derivations and numerical simulations are performed, and the results demonstrate that the maximum variation in the retardance of the optimally designed wide-view-angle waveplate is less than +/- 0.35° over a view-angle range of +/- 20°.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
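A compact sketch of the frequency-response-function part of such a workflow: H1-type FRFs are estimated with SciPy for a reference and a 'damaged' response of a synthetic two-mode system, and the angle between the two FRF vectors serves as the damage index. The simulated system and excitation are assumptions, and the principal component analysis step applied in the paper before the angle computation is omitted here.

import numpy as np
from scipy.signal import csd, welch, lti

fs = 1024.0
t = np.arange(0, 30.0, 1.0 / fs)
rng = np.random.default_rng(3)
force = rng.standard_normal(t.size)                 # broadband (hammer-like) excitation

def response(force, wn):
    # sum of single-degree-of-freedom contributions at natural frequencies wn (rad/s)
    y = np.zeros_like(force)
    for w in wn:
        sys = lti([w ** 2], [1.0, 2 * 0.02 * w, w ** 2])   # 2% damping
        _, yi, _ = sys.output(force, t)
        y += yi
    return y

y_ref = response(force, wn=[2 * np.pi * 20.0, 2 * np.pi * 55.0])   # intact structure
y_dam = response(force, wn=[2 * np.pi * 18.5, 2 * np.pi * 55.0])   # stiffness loss shifts a mode

def frf(x, y):
    f, Pxy = csd(x, y, fs=fs, nperseg=2048)
    _, Pxx = welch(x, fs=fs, nperseg=2048)
    return Pxy / Pxx                                # H1 estimator

H_ref, H_dam = frf(force, y_ref), frf(force, y_dam)
cosang = np.abs(np.vdot(H_ref, H_dam)) / (np.linalg.norm(H_ref) * np.linalg.norm(H_dam))
print("damage index (angle between FRF vectors, rad):", np.arccos(np.clip(cosang, 0.0, 1.0)))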
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial non-destructive testing and the security inspection field. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving the image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as into cases with more than two energy channels.
Optimal control analysis of Ebola disease with control strategies of quarantine and vaccination.
Ahmad, Muhammad Dure; Usman, Muhammad; Khan, Adnan; Imran, Mudassar
2016-07-13
The 2014 Ebola epidemic is the largest in history, affecting multiple countries in West Africa; some isolated cases were also observed in other regions of the world. In this paper, we introduce a deterministic SEIR-type model with additional hospitalization, quarantine and vaccination components in order to understand the disease dynamics. Optimal control strategies, both in the case of hospitalization (with and without quarantine) and vaccination, are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations. Further, with the help of uncertainty and sensitivity analysis we have also identified the most sensitive parameters, which effectively contribute to changes in the disease dynamics. We performed mathematical analysis with numerical simulations and optimal control strategies on our Ebola virus models, using dynamical systems tools. The original model, which allowed transmission of the Ebola virus via human contact, was extended to include imperfect vaccination and quarantine. After the qualitative analysis of all three forms of the Ebola model, numerical techniques, using MATLAB as a platform, were formulated and analyzed in detail. Our simulation results support the claims made in the qualitative section. Our model incorporates an important component, namely individuals at high risk of exposure to the disease, such as front-line health care workers, family members of EVD patients and individuals involved in the burial of deceased EVD patients, rather than the general population in the affected areas. Our analysis suggests that in order for R 0 (i.e., the basic reproduction number) to be less than one, which is the basic requirement for disease elimination, the transmission rate of isolated individuals should be less than one-fourth of that for non-isolated ones. Our analysis also predicts that we need high levels of medication and hospitalization at the beginning of an epidemic. Further, optimal control analysis of the model suggests control strategies that may be adopted by public health authorities in order to reduce the impact of epidemics like Ebola.
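For readers unfamiliar with the underlying compartmental structure, the fragment below integrates a plain SEIR system with scipy.integrate.solve_ivp; it omits the hospitalization, quarantine and vaccination compartments as well as the optimal control layer of the paper, and all parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.3, 1.0 / 9.0, 1.0 / 7.0   # transmission, incubation, recovery rates (assumed)
N = 1_000_000.0

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

y0 = [N - 10.0, 0.0, 10.0, 0.0]
sol = solve_ivp(seir, (0.0, 365.0), y0, t_eval=np.linspace(0.0, 365.0, 366))

R0 = beta / gamma
print(f"basic reproduction number R0 = {R0:.2f}")
print(f"epidemic peak: {sol.y[2].max():.0f} infectious individuals "
      f"on day {sol.t[np.argmax(sol.y[2])]:.0f}")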
A Two Element Laminar Flow Airfoil Optimized for Cruise. M.S. Thesis
NASA Technical Reports Server (NTRS)
Steen, Gregory Glen
1994-01-01
Numerical and experimental results are presented for a new two-element, fixed-geometry natural laminar flow airfoil optimized for cruise Reynolds numbers on the order of three million. The airfoil design consists of a primary element and an independent secondary element with a primary to secondary chord ratio of three to one. The airfoil was designed to improve the cruise lift-to-drag ratio while maintaining an appropriate landing capability when compared to conventional airfoils. The airfoil was numerically developed utilizing the NASA Langley Multi-Component Airfoil Analysis computer code running on a personal computer. Numerical results show a nearly 11.75 percent decrease in overall wing drag with no increase in stall speed at sailplane cruise conditions when compared to a wing based on an efficient single element airfoil. Section surface pressure, wake survey, transition location, and flow visualization results were obtained in the Texas A&M University Low Speed Wind Tunnel. Comparisons between the numerical and experimental data, the effects of the relative position and angle of the two elements, and Reynolds number variations from 8 × 10^5 to 3 × 10^6 for the optimum geometry case are presented.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Bhat, R. B.
1979-01-01
A finite element program is linked with a general purpose optimization program in a 'programing system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.
A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1989-01-01
Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator, with a significantly more expensive optical encoder used for comparison.
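The PCA-plus-ANN mapping described above can be sketched in a few lines. The snippet below is an illustration rather than the authors' implementation: the sensor geometry, the dipole-like field model and the network size are invented placeholders, with scikit-learn's PCA and MLPRegressor standing in for the pseudo-linear filter and the mapping network.

```python
# Illustrative sketch: PCA-compressed multi-sensor readings mapped to position
# with a small neural network. Sensor layout and field model are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
sensor_x = np.linspace(-0.1, 0.1, 9)            # 9 hypothetical sensor locations [m]
positions = rng.uniform(-0.05, 0.05, 2000)      # magnet positions to learn [m]

# Toy dipole-like field magnitude seen by each sensor, plus Gaussian noise
field = 1.0 / ((positions[:, None] - sensor_x[None, :])**2 + 1e-3)
field += rng.normal(0.0, 0.5, field.shape)

# Pseudo-linear PCA filter (keep 3 components) feeding an ANN regressor
model = make_pipeline(PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                                   random_state=0))
model.fit(field[:1500], positions[:1500])
err = np.abs(model.predict(field[1500:]) - positions[1500:])
print(f"mean localization error: {err.mean()*1e3:.2f} mm")
```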
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and on the energy required to operate them; little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was carried out using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability, and cost-benefit measures were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. The analytical and numerical solutions for system availability and mean time to system failure show that configuration I is the optimal configuration.
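The Chapman-Kolmogorov treatment reduces each configuration to a continuous-time Markov chain, from which the mean time to system failure follows by solving a small linear system over the working states. The sketch below illustrates this for a hypothetical two-unit parallel block; the failure and repair rates are invented and the state space is far smaller than in the paper.

```python
import numpy as np

lam, mu = 0.002, 0.05     # hypothetical failure and repair rates [1/h]

# Sub-generator over the working states of a two-unit parallel block
# (state 0: both units up, state 1: one unit up); system failure is absorbing.
T = np.array([[-2*lam, 2*lam],
              [mu,     -(lam + mu)]])

# Mean time to system failure from the Chapman-Kolmogorov equations:
# solve T m = -1 and read off the entry for the fully-working initial state.
m = np.linalg.solve(T, -np.ones(2))
print(f"MTSF starting from both units up: {m[0]:.1f} h")
```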
MC ray-tracing optimization of lobster-eye focusing devices with RESTRAX
NASA Astrophysics Data System (ADS)
Šaroun, Jan; Kulda, Jiří
2006-11-01
The enhanced functionalities of the latest version of the RESTRAX software, providing a high-speed Monte Carlo (MC) ray-tracing code to represent a virtual three-axis neutron spectrometer, include representation of parabolic and elliptic guide profiles and facilities for numerical optimization of parameter values, characterizing the instrument components. As examples, we present simulations of a doubly focusing monochromator in combination with cold neutron guides and lobster-eye supermirror devices, concentrating a monochromatic beam to small sample volumes. A Levenberg-Marquardt minimization algorithm is used to optimize simultaneously several parameters of the monochromator and lobster-eye guides. We compare the performance of optimized configurations in terms of monochromatic neutron flux and energy spread and demonstrate the effect of lobster-eye optics on beam transformations in real and momentum subspaces.
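The Levenberg-Marquardt step used to tune the instrument parameters can be illustrated with SciPy; in the snippet below a toy two-parameter flux model replaces the RESTRAX ray-tracing figure of merit, so the model, its parameters and the data are assumptions for demonstration only.

```python
# Minimal Levenberg-Marquardt sketch: fit a toy focusing-curvature model to
# noisy "virtual experiment" data. In RESTRAX the residuals would come from
# Monte Carlo ray-tracing runs rather than from an analytic formula.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
curvature = np.linspace(0.5, 3.0, 25)                 # hypothetical focusing curvatures
true_peak, true_width = 1.8, 0.6
flux = np.exp(-((curvature - true_peak) / true_width)**2) + rng.normal(0, 0.02, 25)

def residuals(p):
    peak, width = p
    return np.exp(-((curvature - peak) / width)**2) - flux

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print("optimal curvature (peak flux):", fit.x[0])
```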
Testing of Strategies for the Acceleration of the Cost Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Vilim, Richard B.
The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economical dispatch (ED) and capacity planning solutions for the hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each one of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximizes the unit profit by simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides for very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each one of the unit components. The preconditioner samples a set of stochastic power scenarios for each one of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated. After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solving. In order to test the capabilities of the proposed approach, in the present report, the dispatch problem only is tackled, i.e. a reference unit configuration is assumed, and each one of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to the electricity price forecasts.
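A schematic version of the Monte Carlo preconditioner is sketched below: ramp-limited power histories are sampled for each generating component, each scenario is scored with a simple profit function, and the best scenario is kept as the initial guess. The price signal, capacities, ramp limits and cost term are placeholders, not project data.

```python
# Schematic Monte Carlo "preconditioner": sample ramp-limited power histories,
# score each scenario, keep the most profitable one as the initial guess for
# the full RAVEN/Modelica optimization. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
hours = 24
price = 30 + 20*np.sin(np.linspace(0, 2*np.pi, hours))     # $/MWh, synthetic
capacity = {"BOP": 300.0, "SES": 100.0, "ES": 50.0}         # MW, hypothetical
ramp = {"BOP": 20.0, "SES": 40.0, "ES": 50.0}               # MW/h, hypothetical

def sample_history(cap, r):
    """Random walk bounded by capacity and per-hour ramp limits."""
    p = np.empty(hours)
    p[0] = rng.uniform(0, cap)
    for t in range(1, hours):
        p[t] = np.clip(p[t-1] + rng.uniform(-r, r), 0.0, cap)
    return p

best_profit, best_dispatch = -np.inf, None
for _ in range(5000):
    dispatch = {k: sample_history(capacity[k], ramp[k]) for k in capacity}
    total = sum(dispatch.values())
    profit = np.sum(price * total) - 25.0*np.sum(dispatch["BOP"])  # toy cost term
    if profit > best_profit:
        best_profit, best_dispatch = profit, dispatch

print(f"preconditioner profit estimate: ${best_profit:,.0f}")
```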
Roy, Venkat; Simonetto, Andrea; Leus, Geert
2018-06-01
We propose a sensor placement method for spatio-temporal field estimation based on a kriged Kalman filter (KKF) using a network of static or mobile sensors. The developed framework dynamically designs the optimal constellation to place the sensors. We combine the estimation error (for the stationary as well as non-stationary component of the field) minimization problem with a sparsity-enforcing penalty to design the optimal sensor constellation in an economic manner. The developed sensor placement method can be directly used for a general class of covariance matrices (ill-conditioned or well-conditioned) modelling the spatial variability of the stationary component of the field, which acts as a correlated observation noise, while estimating the non-stationary component of the field. Finally, a KKF estimator is used to estimate the field using the measurements from the selected sensing locations. Numerical results are provided to exhibit the feasibility of the proposed dynamic sensor placement followed by the KKF estimation method.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
NASA Astrophysics Data System (ADS)
Li, Yaning; Song, Juha; Ortiz, Christine; Boyce, Mary; Ortiz Group/DMSE/MIT Team; Boyce Group/ME/MIT Team
2011-03-01
Biological sutures are joints which connect two stiff skeletal or skeletal-like components. These joints possess a wavy geometry with a thin organic layer providing adhesion. Examples of biological sutures include mammalian skulls, the pelvic assembly of the armored fish Gasterosteus aculeatus (the three-spined stickleback), and the suture joints in the shell of the red-eared slider turtle. Biological sutures allow for movement and compliance, control stress concentrations, transmit loads, reduce fatigue stress and absorb energy. In this investigation, the mechanics of the role of suture geometry in providing a naturally optimized joint is explored. In particular, analytical and numerical micromechanical models of the suture joint are constructed. The anisotropic mechanical stiffness and strength are studied as a function of suture wavelength, amplitude and the material properties of the skeletal and organic components, revealing key insights into the optimized nature of these ubiquitous natural joints.
Nozzle Numerical Analysis Of The Scimitar Engine
NASA Astrophysics Data System (ADS)
Battista, F.; Marini, M.; Cutrone, L.
2011-05-01
This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the well-thought assumptions made for performance figures of different components during the iteration process within LAPCAT-I will be assessed in more detail. This paper presents a numerical analysis aimed at the design optimization of the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All the computations have been carried out using the CIRA C3NS code in non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.
Optimally analyzing and implementing of bolt fittings in steel structure based on ANSYS
NASA Astrophysics Data System (ADS)
Han, Na; Song, Shuangyang; Cui, Yan; Wu, Yongchun
2018-03-01
Owing to its excellent performance, the ANSYS simulation software has become an outstanding member of the computer-aided engineering (CAE) family; it is committed to innovation in engineering simulation and helps users shorten the design process. First, a typical procedure to implement CAE was designed, and a framework for structural numerical analysis based on ANSYS technology was proposed. Then, the bolt fittings in a beam-column joint of a steel structure were analyzed and optimized in ANSYS, which produced contour plots of the XY shear stress, the YZ shear stress and the Y component of stress. Finally, the ANSYS simulation results were compared with the results measured in the experiment. The ANSYS simulation and analysis results are reliable, efficient and optimal. In the above process, a numerical simulation and analysis model of structural performance was developed for the practice of engineering enterprises.
Recent progress in inverse methods in France
NASA Technical Reports Server (NTRS)
Bry, Pierre-Francois; Jacquotte, Olivier-Pierre; Lepape, Marie-Claire
1991-01-01
Given the current level of jet engine performance, improvement of the various turbomachinery components requires the use of advanced methods in aerodynamics, heat transfer, and aeromechanics. In particular, successful blade design can only be achieved via numerical design methods which make it possible to reach optimized solutions in a much shorter time than ever before. Two design methods which are currently being used throughout the French turbomachinery industry to obtain optimized blade geometries are presented. Examples are presented for compressor and turbine applications. The status of these methods as far as improvement and extension to new fields of applications is also reported.
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three-component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
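The variational principle can be illustrated on a toy problem: up to a model-independent constant, the sampled relative entropy equals the coarse-grained model's negative log-likelihood over the reference ensemble, so minimizing it fits the CG parameters. In the sketch below the "fully atomic" ensemble is a bimodal set of samples and the CG model is a single Gaussian; both are invented for illustration.

```python
# Toy relative-entropy coarse-graining: fit a one-Gaussian CG model to a
# bimodal reference ensemble by minimizing the sampled S_rel (up to the
# constant reference-entropy term, this is the CG negative log-likelihood).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
ref = np.concatenate([rng.normal(-1.0, 0.4, 5000),     # "fully atomic" samples
                      rng.normal(+1.0, 0.4, 5000)])

def srel_up_to_const(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # -<ln p_CG>_ref, dropping constants independent of the CG parameters
    return np.mean(0.5*((ref - mu)/sigma)**2 + np.log(sigma))

opt = minimize(srel_up_to_const, x0=[0.0, 0.0])
print("optimal CG Gaussian: mean = %.3f, sigma = %.3f" % (opt.x[0], np.exp(opt.x[1])))
```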
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal
2011-05-01
In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst an inadequate one may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of being applied in the determination of the adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the forming of defect-free components in industry [1].
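The structure of such an optimization loop is sketched below with SciPy's plain differential evolution standing in for HDEPSO, and simple analytical expressions standing in for the wrinkling and thinning indicators that the paper evaluates with implicit finite element runs; the indicator formulas, bounds and units are illustrative assumptions.

```python
# Sketch of the optimization loop only: SciPy differential evolution replaces
# HDEPSO, and analytic stand-ins replace the FE-based defect indicators.
import numpy as np
from scipy.optimize import differential_evolution

def hydroforming_cost(x):
    pressure, axial_feed = x                 # MPa, mm (hypothetical units)
    thinning = np.exp(0.04*pressure) - 0.002*axial_feed      # bursting risk proxy
    wrinkling = np.exp(0.08*axial_feed) - 0.05*pressure      # wrinkling risk proxy
    filling = (pressure - 60.0)**2 / 400.0                   # die-filling penalty
    return thinning + wrinkling + filling

result = differential_evolution(hydroforming_cost,
                                bounds=[(20.0, 120.0),   # internal pressure
                                        (1.0, 40.0)],    # axial feed
                                seed=0, tol=1e-8)
print("suggested pressure %.1f MPa, axial feed %.1f mm" % tuple(result.x))
```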
Multidimensional bioseparation with modular microfluidics
Chirica, Gabriela S.; Renzi, Ronald F.
2013-08-27
A multidimensional chemical separation and analysis system is described including a prototyping platform and modular microfluidic components capable of rapid and convenient assembly, alteration and disassembly of numerous candidate separation systems. Partial or total computer control of the separation system is possible. Single or multiple alternative processing trains can be tested, optimized and/or run in parallel. Examples related to the separation and analysis of human bodily fluids are given.
Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.
Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong
2018-08-01
Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of main factors, including different loads and surface roughness values, on the sliding friction coefficient in a boundary lubrication condition. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. Then the asperity friction coefficient was predicted by the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients in different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
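A deliberately simplified sketch of the molecular-mechanical decomposition is given below: the molecular term scales with real contact area and interfacial shear stress, while the mechanical (deformation) term grows with the asperity penetration ratio. All coefficients, the asperity radius and the contact estimate are invented placeholders chosen only to reproduce the qualitative crossover around Rq of about 0.2 μm.

```python
# Highly simplified molecular-mechanical decomposition (illustrative only).
import numpy as np

def friction_coefficient(load, rq, tau0=80e6, beta=0.02, k_def=0.3,
                         hardness=2.0e9, radius=5e-6):
    """Return (mu_total, mu_molecular, mu_mechanical) for one rough contact."""
    area_real = load / hardness                      # plastic-contact estimate [m^2]
    # Molecular (adhesion) part: tau0*A_real/W + beta. With A_real = W/H this
    # becomes load-independent, echoing the paper's observation about load.
    mu_mol = tau0 * area_real / load + beta
    penetration = min(rq, radius)                    # crude penetration depth [m]
    mu_mech = k_def * np.sqrt(penetration / radius)  # ploughing / deformation part
    return mu_mol + mu_mech, mu_mol, mu_mech

for rq in (0.05e-6, 0.2e-6, 0.8e-6):                 # smooth -> rough surfaces
    mu, mu_m, mu_d = friction_coefficient(load=10.0, rq=rq)
    print(f"Rq = {rq*1e6:.2f} um: mu = {mu:.3f} "
          f"(molecular {mu_m:.3f}, mechanical {mu_d:.3f})")
```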
Optimizing the separation performance of a gas centrifuge
NASA Astrophysics Data System (ADS)
Wood, H. G.
1997-11-01
Gas centrifuges were originally developed for the enrichment of U^235 from naturally occurring uranium for the purpose of providing fuel for nuclear power reactors and material for nuclear weapons. This required the separation of a binary mixture composed of U^235 and U^238. Since the end of the cold war, a surplus of enriched uranium exists on the world market, but many centrifuge plants exist in numerous countries. These circumstances, together with the growing demand for stable isotopes for chemical and physical research and in medical science, have led to the exploration of alternate applications of gas centrifuge technology. In order to achieve these multi-component separations, existing centrifuges must be modified or new centrifuges must be designed. In either case, it is important to have models of the internal flow fields to predict the separation performance and algorithms to seek the optimal operating conditions of the centrifuges. Here, we use the Onsager pancake model of the internal flow field, and we present an optimization strategy which exploits a similarity parameter in the pancake model. Numerical examples will be presented.
NASA Astrophysics Data System (ADS)
AsséMat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
In part I, we presented the theoretic foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a control-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to hundreds of parameters required in other algorithms. We present numerical evidence that such a parametrization should allow an efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.
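An Erf-parametrized envelope of the kind optimized by GOAT can be written compactly; the sketch below only builds such a pulse from a handful of amplitudes, ramp times and a width, and does not reproduce the actual gate optimization or the Xmon model, so all numbers are hypothetical.

```python
# Sketch of an Erf-parametrized control envelope (pulse shape only).
import numpy as np
from scipy.special import erf

def erf_pulse(t, amplitudes, ramp_times, width):
    """Piecewise-constant amplitudes joined by smooth erf ramps."""
    signal = np.full_like(t, amplitudes[0], dtype=float)
    for a_prev, a_next, t_r in zip(amplitudes[:-1], amplitudes[1:], ramp_times):
        signal += 0.5*(a_next - a_prev)*(1.0 + erf((t - t_r)/width))
    return signal

t = np.linspace(0.0, 40.0, 400)                       # ns, hypothetical gate time
pulse = erf_pulse(t, amplitudes=[0.0, 0.8, 0.3, 0.0],
                  ramp_times=[5.0, 20.0, 35.0], width=1.5)
print("pulse samples:", pulse[::100])
```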
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
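The subspace split can be illustrated on a generic linear problem: the leading singular vectors of the forward operator yield the low-frequency part of the unknown in closed form, leaving only the high-frequency coefficients to the iterative minimization. The matrix below is a random stand-in for the radiative-transport map, not the actual DOT/FOT model.

```python
# Generic subspace-split sketch: analytic low-frequency recovery via SVD,
# high-frequency part left to iterative minimization (not performed here).
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 40))            # toy forward operator
x_true = rng.normal(size=40)
y = A @ x_true                           # synthetic, noise-free data

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10                                   # size of the "low-frequency" subspace

# Closed-form recovery of the low-frequency component from the leading modes
x_low = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# The high-frequency component lives in the remaining right singular vectors;
# in the paper it would be found by minimizing the residual data misfit.
residual_norm = np.linalg.norm(y - A @ x_low)
print(f"data misfit carried by the high-frequency subspace: {residual_norm:.3f}")
```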
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sellappan, R.
1977-01-01
The problem of optimal control with a minimum time criterion as applied to a single boom system for achieving two axis control is discussed. The special case where the initial conditions are such that the system can be driven to the equilibrium state with only a single switching maneuver in the bang-bang optimal sequence is analyzed. The system responses are presented. Application of the linear regulator problem for the optimal control of the telescoping system is extended to consider the effects of measurement and plant noises. The noise uncertainties are included with an application of the estimator - Kalman filter problem. Different schemes for measuring the components of the angular velocity are considered. Analytical results are obtained for special cases, and numerical results are presented for the general case.
Reliability enhancement through optimal burn-in
NASA Astrophysics Data System (ADS)
Kuo, W.
1984-06-01
A numerical reliability and cost model is defined for production line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonrepairable components; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum cost is calculated among the costs and total time of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
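The trade-off behind the optimal burn-in time can be sketched with a Weibull infant-mortality model: the burn-in cost grows linearly with burn-in duration while the expected field repair cost falls as weak components are screened out. All cost figures and Weibull parameters below are invented for illustration.

```python
# Toy burn-in cost trade-off with a Weibull infant-mortality model (shape < 1).
import numpy as np

shape, scale = 0.5, 2000.0            # Weibull infant-mortality parameters [h]
c_burn_in = 0.05                      # $ per component-hour of burn-in
c_field_repair = 400.0                # $ per field failure
service_time = 8760.0                 # one year of operation [h]

def weibull_sf(t):
    return np.exp(-(t/scale)**shape)  # survival function

t_b = np.linspace(0.0, 200.0, 2001)   # candidate burn-in durations [h]
# Probability of failing in service given survival of burn-in
p_field_fail = 1.0 - weibull_sf(t_b + service_time)/weibull_sf(t_b)
total_cost = c_burn_in*t_b + c_field_repair*p_field_fail

best = np.argmin(total_cost)
print(f"optimal burn-in: {t_b[best]:.0f} h, expected cost ${total_cost[best]:.2f}")
```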
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite component is reduced by 38.8%; at the same time, its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
A Suboptimal Power-Saving Transmission Scheme in Multiple Component Carrier Networks
NASA Astrophysics Data System (ADS)
Chung, Yao-Liang; Tsai, Zsehong
Power consumption due to transmissions in base stations (BSs) has been a major contributor to communication-related CO2 emissions. A power optimization model is developed in this study with respect to radio resource allocation and activation in a multiple Component Carrier (CC) environment. We formulate and solve the power-minimization problem of the BS transceivers for multiple-CC networks with carrier aggregation, while maintaining the overall system and respective users' utilities above minimum levels. The optimized power consumption based on this model can be viewed as a lower bound for that of other algorithms employed in practice. A suboptimal scheme with low computation complexity is proposed. Numerical results show that the power consumption of our scheme is much lower than that of the conventional scheme in which all CCs are always active, provided both schemes maintain the same required utilities.
NASA Astrophysics Data System (ADS)
Chen, Yunsheng; Lu, Xinghua
2018-05-01
The mechanical parts of the UAV fuselage surface are easily fractured by the action of centrifugal loads. In order to improve the compressive strength of the UAV and to guide the milling and planing of its mechanical parts, a numerical simulation method for UAV fuselage compression under centrifugal load, based on the discrete element analysis method, is proposed. The three-dimensional discrete element method is used to establish a splitting tensile force analysis model of the UAV fuselage under centrifugal loading. The micro-contact connection parameters of the UAV fuselage are calculated, and a yield tensile model of the mechanical components is established. The dynamic and static mechanical model of the fuselage milling is analyzed by a combined axial amplitude vibration frequency method, and the correlation parameters of cutting depth with tool wear are obtained. The centrifugal load stress spectrum of the UAV surface is calculated. The meshing and finite element simulation of the UAV rotor blade is carried out to optimize the milling process. The test results show that the proposed method yields higher accuracy in the numerical compression test of the UAV, that the fatigue damage resistance of the UAV body is improved through the milling and processing optimization, and that the mechanical strength of the UAV can be effectively improved.
NASA Astrophysics Data System (ADS)
Szczepanik, M.; Poteralski, A.
2016-11-01
The paper is devoted to an application of evolutionary methods and the finite element method to the optimization of shell structures. Optimization of the thickness of a car wheel (shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the ring of the wheel, and the surface connecting the two mentioned earlier. The last one is subjected to the optimization process. The structures are discretized by triangular finite elements and subjected to volume constraints. Using the proposed method, the material properties or thickness of the finite elements are changed evolutionarily and some of the elements are eliminated. As a result the optimal shape, topology and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for computer-aided optimal design.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements as the number of design variables increases, and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
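A minimal polynomial response-surface loop in the spirit of this methodology is sketched below: a quadratic surrogate is fitted to scattered (design, objective) samples and then searched cheaply over the design box. The two design variables and the underlying objective are synthetic placeholders, not the turbine or injector cases.

```python
# Minimal quadratic response-surface sketch: fit a surrogate to scattered
# samples, then search the surrogate instead of the expensive simulations.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, size=(60, 2))                       # sampled designs
y = 1.0 - (X[:, 0]-0.3)**2 - 2.0*(X[:, 1]+0.2)**2 + rng.normal(0, 0.02, 60)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)

# Cheap global search on the surrogate over the design box
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 201),
                            np.linspace(-1, 1, 201)), axis=-1).reshape(-1, 2)
best = grid[np.argmax(surrogate.predict(grid))]
print("surrogate optimum near:", best)
```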
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Simulating the injection of micellar solutions to recover diesel in a sand column.
Bernardez, Letícia A; Therrien, René; Lefebvre, René; Martel, Richard
2009-01-26
This paper presents numerical simulations of laboratory experiments where diesel, initially present at 18% residual saturation in a sand column, was recovered by injecting a micellar solution containing the surfactant Hostapur SAS-60 (SAS), and two alcohols, n-butanol (n-BuOH), and n-pentanol (n-PeOH). The micellar solution was developed and optimized for diesel recovery using phase diagrams and soil column experiments. Numerical simulations with the compositional simulator UTCHEM agree with the experimental results and show that the entire residual diesel in the sand column was recovered after the downward injection of 5 pore volumes of the micellar solution. Recovery of diesel occurs by enhanced solubility in the microemulsion phase and by mobilization. An additional series of simulations investigated the effects of phase transfer, alcohol partitioning, and component segregation on diesel recovery. These simulations indicate that diesel can be accurately represented in the model by a single component, but that the pseudo-component approach for active matter and the assumption of local phase equilibrium leads to an underestimation of diesel mobilization.
Simulating the injection of micellar solutions to recover diesel in a sand column
NASA Astrophysics Data System (ADS)
Bernardez, Letícia A.; Therrien, René; Lefebvre, René; Martel, Richard
2009-01-01
This paper presents numerical simulations of laboratory experiments where diesel, initially present at 18% residual saturation in a sand column, was recovered by injecting a micellar solution containing the surfactant Hostapur SAS-60 (SAS), and two alcohols, n-butanol ( n-BuOH), and n-pentanol ( n-PeOH). The micellar solution was developed and optimized for diesel recovery using phase diagrams and soil column experiments. Numerical simulations with the compositional simulator UTCHEM agree with the experimental results and show that the entire residual diesel in the sand column was recovered after the downward injection of 5 pore volumes of the micellar solution. Recovery of diesel occurs by enhanced solubility in the microemulsion phase and by mobilization. An additional series of simulations investigated the effects of phase transfer, alcohol partitioning, and component segregation on diesel recovery. These simulations indicate that diesel can be accurately represented in the model by a single component, but that the pseudo-component approach for active matter and the assumption of local phase equilibrium leads to an underestimation of diesel mobilization.
Present State of the Art of Composite Fabric Forming: Geometrical and Mechanical Approaches
Cherouat, Abel; Borouchaki, Houman
2009-01-01
Continuous fibre reinforced composites are now firmly established engineering materials for the manufacture of components in the automotive and aerospace industries. In this respect, composite fabrics provide flexibility in design and manufacture. The ability to define the ply shapes and material orientation has allowed engineers to optimize the composite properties of the parts. The formulation of new numerical models for the simulation of composite forming processes must allow for a reduction in manufacturing delays and an optimization of costs in an integrated design approach. We propose two approaches to simulate the deformation of woven fabrics: geometrical and mechanical approaches.
Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design
NASA Technical Reports Server (NTRS)
Li, Wu; Robinson, Jay
2016-01-01
This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru
2013-08-01
Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This allows the simulation process to be made very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√(ε)), where ε is a parameter of approximation being equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
NASA Astrophysics Data System (ADS)
Hu, Enzhu; Bartsev, Sergey I.; Zhao, Ming; Liu, Professor Hong
The conceptual scheme of an experimental bioregenerative life support system (BLSS) for planetary exploration was designed, consisting of four elements: human metabolism, higher plants, silkworms and waste treatment. Fifteen kinds of higher plants, such as wheat, rice, soybean, lettuce and mulberry, were selected as the regenerative component of the BLSS, providing the crew with air, water, and vegetable food. Silkworms, which provide animal nutrition for the crew, were fed mulberry leaves during the first three instars and lettuce leaves during the last two instars. The inedible biomass of higher plants, human wastes and silkworm feces were composted into a soil-like substrate, which can be reused for higher-plant cultivation. Salt, sugar and some household materials such as soap and shampoo would be provided from outside. To maintain the steady state of the BLSS, dehydrated wastes of the same amount and elemental composition were removed periodically. The balance of matter flows between BLSS components was described by a system of algebraic equations. The mass flows between the components were optimized in EXCEL spreadsheets using Solver. The numerical method used in this study was Newton's method.
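The mass-flow balance can be posed as a small linear system and solved directly rather than with a spreadsheet Solver; the toy sketch below balances oxygen and edible biomass in two unknown cultivation rates, with all stoichiometric coefficients and crew requirements invented for illustration.

```python
# Toy stand-in for the BLSS mass-flow balance: two linear balance equations
# (oxygen and crew food) in two unknown daily cultivation rates.
import numpy as np

# unknowns: [daily crop dry biomass, daily mulberry leaf production]  (kg/day)
# row 1: O2 released per kg of dry biomass grown (crops and mulberry alike)
# row 2: crew food = edible crop fraction + silkworm biomass converted from leaves
A = np.array([[1.07, 1.07],
              [0.45, 0.10]])
b = np.array([2.50,          # crew O2 demand (kg/day, hypothetical)
              0.90])         # crew food demand (kg/day, hypothetical)

rates = np.linalg.solve(A, b)
print("required crop biomass: %.2f kg/day, mulberry leaves: %.2f kg/day" % tuple(rates))
```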
Evaluation of on-line pulse control for vibration suppression in flexible spacecraft
NASA Technical Reports Server (NTRS)
Masri, Sami F.
1987-01-01
A numerical simulation was performed, by means of a large-scale finite element code capable of handling large deformations and/or nonlinear behavior, to investigate the suitability of the nonlinear pulse-control algorithm to suppress the vibrations induced in the Spacecraft Control Laboratory Experiment (SCOLE) components under realistic maneuvers. Among the topics investigated were the effects of various control parameters on the efficiency and robustness of the vibration control algorithm. Advanced nonlinear control techniques were applied to an idealized model of some of the SCOLE components to develop an efficient algorithm to determine the optimal locations of point actuators, considering the hardware on the SCOLE project as distributed in nature. The control was obtained from a quadratic optimization criterion, given in terms of the state variables of the distributed system. An experimental investigation was performed on a model flexible structure resembling the essential features of the SCOLE components, and electrodynamic and electrohydraulic actuators were used to investigate the applicability of the control algorithm with such devices in addition to mass-ejection pulse generators using compressed air.
Coarse-graining errors and numerical optimization using a relative entropy framework.
Chaimovich, Aviel; Shell, M Scott
2011-03-07
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro
2016-12-01
Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework; or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most cases the nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing simultaneous growth of all of the velocity components due to strong nonlinear coupling.
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibrations or sound levels in industrial vibroacoustic problems, the low-cost and efficient way consists in introducing visco- and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite elements method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient to predict dynamical behavior of structures including visco-elastic and/or poro-elastic patches. Then, to ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to classical ( u, p) CMS method: the Ritz basis is completed with static residuals associated to visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainties propagation using reduced models.
Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne
2017-01-01
Motivation Human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, endoscopic investigation of the actual phonatory process in detail is challenging. Hence the biomechanics of the human phonatory process are still not yet fully understood. Therefore, we adapt a mathematical model of the vocal folds towards vocal fold oscillations to quantify gender and age related differences expressed by computed biomechanical model parameters. Methods The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying model masses, stiffness and subglottal pressure. For adapting the model towards the recorded vocal fold dynamics, three different optimization algorithms (Nelder–Mead, Particle Swarm Optimization and Simulated Bee Colony) in combination with three cost functions were considered for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results. However, only one cost function seems to be suitable for this optimization task. The gained model parameters reflect the phonatory biomechanics for men and women well and show quantitative age- and gender-specific differences. The model parameters for younger females and males showed lower subglottal pressures, lower stiffness and higher masses than the corresponding elderly groups. Females exhibited higher subglottal pressures, smaller oscillation masses and larger stiffness than the corresponding similar aged male groups. Optimizing numerical models towards vocal fold oscillations is useful to identify underlying laryngeal components controlling the phonatory process. PMID:29121085
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-05-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
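As a hedged stand-in for the PDCO-based solver (which is not reproduced here), a non-negative, L1-regularized inversion of a multi-exponential decay can be sketched with a generic bound-constrained optimizer; the kernel, noise level and regularization weight below are illustrative.

    # Non-negative L1-regularized inversion of a multi-exponential decay;
    # because f >= 0, the L1 penalty reduces to a simple sum of f.
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.001, 1.0, 200)                 # acquisition times (s)
    T2 = np.logspace(-3, 0, 60)                      # candidate relaxation times (s)
    K = np.exp(-t[:, None] / T2[None, :])            # Laplace-type kernel matrix

    f_true = np.zeros(T2.size)
    f_true[[15, 40]] = [1.0, 0.6]                    # two sparse relaxation components
    rng = np.random.default_rng(1)
    y = K @ f_true + 0.01 * rng.standard_normal(t.size)

    lam = 0.05
    def objective(f):
        r = K @ f - y
        return 0.5 * r @ r + lam * f.sum()           # least squares + L1 (f >= 0)

    res = minimize(objective, x0=np.zeros(T2.size), method="L-BFGS-B",
                   bounds=[(0.0, None)] * T2.size)
    print(np.nonzero(res.x > 0.05)[0])               # indices of recovered components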
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical models represent the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating procedure, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, in which geometry shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Simulations performed using this updated model, with its accurate geometry, will therefore yield more reliable results.
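Since the updating objective is the diagonal of the MAC matrix, a minimal sketch of its computation is given below; the simulated and experimental mode-shape matrices are hypothetical placeholders.

    # Modal Assurance Criterion (MAC) between two sets of real mode shapes:
    # MAC_ij = |phi_i^T psi_j|^2 / ((phi_i^T phi_i)(psi_j^T psi_j)).
    import numpy as np

    def mac_matrix(phi_sim, phi_exp):
        num = np.abs(phi_sim.T @ phi_exp) ** 2
        den = np.outer(np.sum(phi_sim * phi_sim, axis=0),
                       np.sum(phi_exp * phi_exp, axis=0))
        return num / den

    rng = np.random.default_rng(2)
    phi_exp = rng.standard_normal((50, 4))            # "measured" mode shapes
    phi_sim = phi_exp + 0.1 * rng.standard_normal((50, 4))
    print(np.diag(mac_matrix(phi_sim, phi_exp)))      # values near 1 mean good correlation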
A new method to optimize natural convection heat sinks
NASA Astrophysics Data System (ADS)
Lampio, K.; Karvinen, R.
2017-08-01
The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
NASA Astrophysics Data System (ADS)
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
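The iterative thresholding algorithm mentioned above is, in a generic sparse-recovery setting, closely related to iterative soft-thresholding (ISTA); a minimal sketch is given below, with a random linear operator standing in for the time-fractional forward map.

    # Generic ISTA sketch: minimize ||A f - b||^2 / 2 + lam * ||f||_1.
    import numpy as np

    def ista(A, b, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        f = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ f - b)
            z = f - grad / L
            f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return f

    rng = np.random.default_rng(3)
    A = rng.standard_normal((80, 120))
    f_true = np.zeros(120)
    f_true[[10, 60, 90]] = [1.0, -0.5, 0.8]
    b = A @ f_true + 0.01 * rng.standard_normal(80)
    print(np.round(ista(A, b, lam=0.1)[[10, 60, 90]], 2))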
Numerical Simulation of the Francis Turbine and CAD Used to Optimize the Runner Design (2nd).
NASA Astrophysics Data System (ADS)
Sutikno, Priyono
2010-06-01
Hydro power is the most important renewable energy source on earth. The water is free of charge, and the production of greenhouse gases (mainly CO2) associated with generating electric energy in a hydroelectric power station is negligible. Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced when required. Within modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for the rehabilitation and upgrading of existing power plants, the main concerns are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, to avoid or minimize cavitation, and to avoid or minimize vibrations over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery of hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting. The classic design by graphic-analytical methods is cumbersome and cannot make evident the positive or negative aspects of the design options, so it became necessary to move from the classical design methods to an adequate design method using CAD software. The many options chosen during the design calculations at a specific design step can then be verified, both as an ensemble and in detail. The final graphic post-processing is carried out only for the optimal solution, through a 3D representation of the runner as a whole for final approval of the geometric shape. This article investigates the redesign of the runner of a medium-head Francis-type hydraulic turbine, with the rated specific speed ns as the most important parameter.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and by the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
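As a hedged numerical illustration of that recommendation, the sketch below estimates the reliability of derived ratio scores across two simulated trials with a one-way random-effects intraclass correlation; the data and the specific ICC form are illustrative choices, not the study's procedure.

    # ICC(1,1) for ratio scores computed from simulated numerator/denominator trials.
    import numpy as np

    def icc_oneway(scores):
        # scores: (n_subjects, k_trials); one-way random-effects ICC(1,1).
        n, k = scores.shape
        grand = scores.mean()
        msb = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
        msw = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(4)
    true = rng.normal(1.0, 0.2, size=(30, 1))          # subjects' true ratio level
    numer = true + rng.normal(0, 0.05, size=(30, 2))   # two trials of the numerator
    denom = 1.0 + rng.normal(0, 0.05, size=(30, 2))    # two trials of the denominator
    ratio = numer / denom
    print(round(icc_oneway(ratio), 3))                 # reliability of the simple ratio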
Calculation of Sensitivity Derivatives in an MDAO Framework
NASA Technical Reports Server (NTRS)
Moore, Kenneth T.
2012-01-01
During gradient-based optimization of a system, it is necessary to generate the derivatives of each objective and constraint with respect to each design parameter. If the system is multidisciplinary, it may consist of a set of smaller "components" with some arbitrary data interconnection and process workflow. Analytical derivatives in these components can be used to improve the speed and accuracy of the derivative calculation over a purely numerical calculation; however, a multidisciplinary system may include both components for which derivatives are available and components for which they are not. Three methods to calculate the sensitivity of a mixed multidisciplinary system are presented: the finite difference method, where the derivatives are calculated numerically; the chain rule method, where the derivatives are successively cascaded along the system's network graph; and the analytic method, where the derivatives come from the solution of a linear system of equations. Some improvements to these methods, to accommodate mixed multidisciplinary systems, are also presented; in particular, a new method is introduced to allow existing derivatives to be used inside of finite difference. All three methods are implemented and demonstrated in the open-source MDAO framework OpenMDAO. It was found that there are advantages to each of them depending on the system being solved.
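A toy sketch contrasting two of the three approaches, finite differencing the coupled system versus cascading analytic component derivatives along the connection graph, is given below; the two hypothetical components are illustrative, not part of OpenMDAO.

    # Two chained "components": y = A(x), z = B(x, y); compute dz/dx two ways.
    import numpy as np

    def comp_a(x):            # component A: y = x**2
        return x ** 2

    def comp_b(x, y):         # component B: z = sin(y) + 3*x
        return np.sin(y) + 3.0 * x

    def system(x):
        return comp_b(x, comp_a(x))

    x0, h = 1.3, 1e-6
    fd = (system(x0 + h) - system(x0 - h)) / (2 * h)           # purely numerical

    dy_dx = 2 * x0                                             # analytic in A
    dz_dy, dz_dx_partial = np.cos(comp_a(x0)), 3.0             # analytic in B
    chain = dz_dy * dy_dx + dz_dx_partial                      # cascaded derivative
    print(fd, chain)                                           # should agree closely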
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
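For orientation, a standard finite-time LQT cost functional (the textbook form; the paper's infinite-interval finite-energy and finite-average-energy variants modify the horizon and normalization) can be written as

    \min_{u}\; J(u) = \int_{0}^{T} \Big[\big(x(t)-r(t)\big)^{\top} Q\,\big(x(t)-r(t)\big) + u(t)^{\top} R\, u(t)\Big]\, dt,
    \qquad \dot{x}(t) = A\,x(t) + B\,u(t), \quad x(0) = x_0,

with $Q \succeq 0$, $R \succ 0$, and $r(t)$ the reference trajectory to be tracked; letting $T \to \infty$ gives the infinite-time-interval problems discussed above.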
NASA Astrophysics Data System (ADS)
Piccininni, A.; Palumbo, G.; Franco, A. Lo; Sorgente, D.; Tricarico, L.; Russello, G.
2018-05-01
The continuous search for lightweight components for transport applications, in order to reduce harmful emissions, draws attention to light alloys such as Aluminium (Al) alloys, which combine low density with high strength-to-weight ratios. Such advantages are partially counterbalanced by the poor formability at room temperature. A viable solution is to adopt a localized laser heat treatment of the blank before the forming process to obtain a tailored distribution of material properties, so that the blank can be formed at room temperature by means of conventional press machines. Such an approach has been extensively investigated for age-hardenable alloys, but in the present work the attention is focused on the 5000 series; in particular, the optimization of the deep drawing process of the alloy AA5754 H32 is proposed through a numerical/experimental approach. A preliminary investigation was necessary to correctly tune the laser parameters (focus length, spot dimension) to effectively obtain the annealed state. Optimal process parameters were then obtained by coupling a 2D FE model with an optimization platform managed by a multi-objective genetic algorithm. The optimal solution (i.e. the one maximizing the LDR) in terms of blankholder force and extent of the annealed region was thus evaluated and validated through experimental trials. A good match between experimental and numerical results was found. The optimal solution yielded an LDR of the locally heat-treated blank larger than that of the material in either the wrought condition (H32) or the annealed condition (H111).
Towards an Optimal Noise Versus Resolution Trade-Off in Wind Scatterometry
NASA Technical Reports Server (NTRS)
Williams, Brent A.
2011-01-01
This paper approaches the noise versus resolution trade-off in wind scatterometry from a field-wise retrieval perspective. Theoretical considerations are discussed, and a practical implementation using a MAP estimator is applied to the SeaWinds scatterometer. The approach is compared to conventional approaches as well as to numerical weather predictions. The new approach incorporates knowledge of the wind spectrum to reduce the impact of components of the wind signal that are expected to be noisy.
Developmental framework to validate future designs of ballistic neck protection.
Breeze, J; Midwinter, M J; Pope, D; Porter, K; Hepper, A E; Clasper, J
2013-01-01
The number of neck injuries has increased during the war in Afghanistan, and they have become an appreciable source of mortality and long-term morbidity for UK servicemen. A three-dimensional numerical model of the neck is necessary to allow simulation of penetrating injury from explosive fragments so that the design of body armour can be optimal, and a framework is required to validate and describe the individual components of this program. An interdisciplinary consensus group consisting of military maxillofacial surgeons, and biomedical, physical, and material scientists was convened to generate the components of the framework, and as a result it incorporates the following components: analysis of deaths and long-term morbidity, assessment of critical cervical structures for incorporation into the model, characterisation of explosive fragments, evaluation of the material of which the body armour is made, and mapping of the entry sites of fragments. The resulting numerical model will simulate the wound tract produced by fragments of differing masses and velocities, and illustrate the effects of temporary cavities on cervical neurovascular structures. Using this framework, a new shirt to be worn under body armour that incorporates ballistic cervical protection has been developed for use in Afghanistan. New designs of the collar validated by human factors and assessment of coverage are currently being incorporated into early versions of the numerical model. The aim of this paper is to describe this developmental framework and provide an update on the current progress of its individual components. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for characterizing pathological modifications in epithelial tissues, such as cancer, in vivo. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires consideration of three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combinations of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method are discussed.
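A hedged sketch of the multi-start strategy described above (repeated Levenberg-Marquardt fits from randomly sampled initial optical parameters, keeping the best) is given below; the forward model is a simple decaying curve standing in for the diffusion-approximation solution.

    # Multi-start local least-squares fit of (mu_a, mu_s) to synthetic reflectance data.
    import numpy as np
    from scipy.optimize import least_squares

    distances = np.linspace(0.5, 3.0, 12)                 # source-detector separations (mm)

    def forward(mu_a, mu_s, r):
        # Illustrative decaying reflectance curve, not the analytical DRS solution;
        # abs() guards against negative trial values during the LM iterations.
        mu_a, mu_s = abs(mu_a), abs(mu_s)
        return np.exp(-np.sqrt(3.0 * mu_a * (mu_a + mu_s)) * r) / r ** 2

    rng = np.random.default_rng(5)
    measured = forward(0.02, 1.5, distances) * (1 + 0.01 * rng.standard_normal(12))

    def residuals(p):
        return forward(p[0], p[1], distances) - measured

    best = None
    for _ in range(20):                                   # random multi-start
        x0 = rng.uniform([0.001, 0.5], [0.1, 3.0])        # sampled (mu_a, mu_s) guesses
        fit = least_squares(residuals, x0, method="lm")   # local Levenberg-Marquardt
        if best is None or fit.cost < best.cost:
            best = fit
    print(best.x)                                         # estimated (mu_a, mu_s)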
NASA Astrophysics Data System (ADS)
Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Formhals, Julian; Bär, Kristian; Sass, Ingo
2017-04-01
Large-scale borehole thermal energy storage (BTES) is a promising technology in the development of sustainable, renewable and low-emission district heating concepts. Such systems consist of several components and assemblies such as the borehole heat exchangers (BHE), other heat sources (e.g. solar thermal collectors, combined heat and power plants, peak load boilers, heat pumps), distribution networks and heating installations. The complexity of these systems necessitates numerical simulations in the design and planning phase. Generally, the subsurface components are simulated separately from the above-ground components of the district heating system. However, as fluid and heat are exchanged, the subsystems interact with each other and thereby mutually affect their performance. For a proper design of the overall system, it is therefore imperative to take into account the interdependencies of the subsystems. Based on TCP/IP communication, we have developed an interface for coupling a simulation package for heating installations with a finite element software for modeling the heat flow in the subsurface and the underground installations. This allows for a co-simulation of all system components, whereby the interaction of the different subsystems is considered. Furthermore, the concept allows for a mathematical optimization of the components and the operational parameters. Consequently, a finer adjustment of the system can be ensured and a more precise prognosis of the system's performance can be realized.
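A minimal sketch of a TCP/IP co-simulation handshake of the kind described is given below: a building-side process and a subsurface-side process exchange fluid temperatures each time step. The port, message format and the "physics" on both sides are trivial placeholders, not the coupled simulation packages used by the authors.

    # Toy co-simulation: client = heating installation, server = subsurface/BHE model.
    import json, socket, threading, time

    HOST, PORT, STEPS = "127.0.0.1", 50555, 5

    def subsurface_server():
        ground_temp = 12.0                              # placeholder ground state (degC)
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                for _ in range(STEPS):
                    msg = json.loads(conn.recv(1024).decode())
                    # Toy ground response: pull the fluid toward the ground temperature.
                    out_temp = msg["inlet_temp"] + 0.3 * (ground_temp - msg["inlet_temp"])
                    conn.sendall(json.dumps({"outlet_temp": out_temp}).encode())

    threading.Thread(target=subsurface_server, daemon=True).start()
    time.sleep(0.2)                                     # let the server start listening

    with socket.create_connection((HOST, PORT)) as sock:
        inlet = 40.0                                    # building-side supply temperature (degC)
        for step in range(STEPS):
            sock.sendall(json.dumps({"inlet_temp": inlet, "flow": 2.0}).encode())
            reply = json.loads(sock.recv(1024).decode())
            inlet = reply["outlet_temp"] + 5.0          # toy heat-pump temperature lift
            print(step, round(reply["outlet_temp"], 2))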
Sandia National Laboratories analysis code data base
NASA Astrophysics Data System (ADS)
Peterson, C. W.
1994-11-01
Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.
Dispersive effects on multicomponent transport through porous media
NASA Astrophysics Data System (ADS)
Dutta, Sourav; Daripa, Prabir
2017-11-01
We use a hybrid numerical method to solve a global pressure based porous media flow model of chemical enhanced oil recovery. This is an extension of our recent work. The numerical method is based on the use of a discontinuous finite element method and the modified method of characteristics. The impact of molecular diffusion and mechanical dispersion on the evolution of scalar concentration distributions are studied through numerical simulations of various flooding schemes. The relative importance of the advective, capillary diffusive and dispersive fluxes are compared over different flow regimes defined in the parameter space of Capillary number, Peclet number, longitudinal and transverse dispersion coefficients. Such studies are relevant for the design of effective injection policies and determining optimal combinations of chemical components for improving recovery. This work has been possible due to financial support from the U.S. National Science Foundation Grant DMS-1522782.
The design of photovoltaic plants - An optimization procedure
NASA Astrophysics Data System (ADS)
Bartoli, B.; Cuomo, V.; Fontana, F.; Serio, C.; Silvestrini, V.
An analytical model is developed to match the components and overall size of a solar power facility (comprising photovoltaic array, maximum-power tracker, battery storage system, and inverter) to the load requirements and climatic conditions of a proposed site at the smallest possible cost. Input parameters are the efficiencies and unit costs of the components, the load fraction to be covered (for stand-alone systems), the statistically analyzed meteorological data, and the cost and efficiency data of the support system (for fuel-generator-assisted plants). Numerical results are presented in graphs and tables for sites in Italy, and it is found that the explicit form of the model equation is independent of locality, at least for this region.
Engineering solutions for polymer composites solar water heaters production
NASA Astrophysics Data System (ADS)
Frid, S. E.; Arsatov, A. V.; Oshchepkov, M. Yu.
2016-06-01
Engineering solutions aimed at a considerable decrease of solar water heater cost through the use of polymer composites in heater construction and through the integration of the solar collector and heat storage into a single device are analyzed. The possibility of building solar water heaters from only three components is demonstrated, replacing welding, soldering, mechanical treatment, and assembly of a complicated construction with the molding of large polymer composite components and their gluing. Materials for the unit components and engineering solutions for their manufacturing are analyzed with consideration of the construction requirements of solar water heaters. The optimal materials are glass-fiber and carbon-filled plastics based on hot-cure thermosets, and the optimal molding technology is hot molding. The absorbing panel should be manufactured as a corrugated surface and coated with a special selective paint. The parameters of the unit have been optimized by calculation. The developed two-dimensional numerical model of the unit shows good agreement with experiment. The optimal ratio of daily load to receiving surface area for a solar water heater operating on a clear summer day in the midland of Russia is 130‒150 L/m2. Storage tank volume and load schedule have only a slight effect on the solar water heater output. A thermal insulation layer of 35‒40 mm is sufficient to provide efficient thermal insulation of the back and side walls. An experimental model layout, representing a solar water heater prototype with a prime cost of 70‒90 per m2 of receiving surface, has been developed for a manufacturing volume of no less than 5000 pieces per year.
A Program to Improve the Triangulated Surface Mesh Quality Along Aircraft Component Intersections
NASA Technical Reports Server (NTRS)
Cliff, Susan E.
2005-01-01
A computer program has been developed for improving the quality of unstructured triangulated surface meshes in the vicinity of component intersections. The method relies solely on point removal and edge swapping for improving the triangulations. It can be applied to any lifting surface component such as a wing, canard or horizontal tail component intersected with a fuselage, or it can be applied to a pylon that is intersected with a wing, fuselage or nacelle. The lifting surfaces or pylon are assumed to be aligned in the axial direction with closed trailing edges. The method currently maintains salient edges only at leading and trailing edges of the wing or pylon component. This method should work well for any shape of fuselage that is free of salient edges at the intersection. The method has been successfully demonstrated on a total of 125 different test cases that include both blunt and sharp wing leading edges. The code is targeted for use in the automated environment of numerical optimization where geometric perturbations to individual components can be critical to the aerodynamic performance of a vehicle. Histograms of triangle aspect ratios are reported to assess the quality of the triangles attached to the intersection curves before and after application of the program. Large improvements to the quality of the triangulations were obtained for the 125 test cases; the quality was sufficient for use with an automated tetrahedral mesh generation program that is used as part of an aerodynamic shape optimization method.
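A minimal sketch of the aspect-ratio histogram used to assess triangle quality is given below; the specific quality measure (longest edge over its corresponding altitude) and the random triangles are assumptions for illustration, not necessarily the metric or data used by the program.

    # Aspect-ratio histogram for a set of surface triangles.
    import numpy as np

    def aspect_ratio(p0, p1, p2):
        e = [np.linalg.norm(p1 - p0), np.linalg.norm(p2 - p1), np.linalg.norm(p0 - p2)]
        area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
        longest = max(e)
        return longest / (2.0 * area / longest)      # ~1.15 for an equilateral triangle

    rng = np.random.default_rng(6)
    tris = rng.random((200, 3, 3))                   # hypothetical surface triangles
    ratios = np.array([aspect_ratio(*t) for t in tris])
    hist, edges = np.histogram(ratios, bins=[0, 2, 4, 8, 16, np.inf])
    for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
        print(f"{lo:>4.0f}-{hi:<4.0f}: {h}")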
A High Order Discontinuous Galerkin Method for 2D Incompressible Flows
NASA Technical Reports Server (NTRS)
Liu, Jian-Guo; Shu, Chi-Wang
1999-01-01
In this paper we introduce a high order discontinuous Galerkin method for two dimensional incompressible flow in vorticity streamfunction formulation. The momentum equation is treated explicitly, utilizing the efficiency of the discontinuous Galerkin method. The streamfunction is obtained by a standard Poisson solver using continuous finite elements. There is a natural matching between these two finite element spaces, since the normal component of the velocity field is continuous across element boundaries. This allows for correct upwinding gluing in the discontinuous Galerkin framework, while still maintaining total energy conservation with no numerical dissipation and total enstrophy stability. The method is suitable for inviscid or high Reynolds number flows. Optimal error estimates are proven and verified by numerical experiments.
NASA Astrophysics Data System (ADS)
Ozbulut, Osman E.; Hurlebaus, Stefan
2011-11-01
This paper proposes a re-centering variable friction device (RVFD) for control of civil structures subjected to near-field earthquakes. The proposed hybrid device has two sub-components. The first sub-component of this hybrid device consists of shape memory alloy (SMA) wires that exhibit a unique hysteretic behavior and full recovery following post-transformation deformations. The second sub-component of the hybrid device consists of variable friction damper (VFD) that can be intelligently controlled for adaptive semi-active behavior via modulation of its voltage level. In general, installed SMA devices have the ability to re-center structures at the end of the motion and VFDs can increase the energy dissipation capacity of structures. The full realization of these devices into a singular, hybrid form which complements the performance of each device is investigated in this study. A neuro-fuzzy model is used to capture rate- and temperature-dependent nonlinear behavior of the SMA components of the hybrid device. An optimal fuzzy logic controller (FLC) is developed to modulate voltage level of VFDs for favorable performance in a RVFD hybrid application. To obtain optimal controllers for concurrent mitigation of displacement and acceleration responses, tuning of governing fuzzy rules is conducted by a multi-objective heuristic optimization. Then, numerical simulation of a multi-story building is conducted to evaluate the performance of the hybrid device. Results show that a re-centering variable friction device modulated with a fuzzy logic control strategy can effectively reduce structural deformations without increasing acceleration response during near-field earthquakes.
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. The application of the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and present how to overcome them. We demonstrate that the source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
Parametric amplification in quasi-PT symmetric coupled waveguide structures
NASA Astrophysics Data System (ADS)
Zhong, Q.; Ahmed, A.; Dadap, J. I.; Osgood, R. M., Jr.; El-Ganainy, R.
2016-12-01
The concept of non-Hermitian parametric amplification was recently proposed as a means to achieve an efficient energy conversion throughout the process of nonlinear three wave mixing in the absence of phase matching. Here we investigate this effect in a waveguide coupler arrangement whose characteristics are tailored to introduce passive PT symmetry only for the idler component. By means of analytical solutions and numerical analysis, we demonstrate the utility of these novel schemes and obtain the optimal design conditions for these devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey
Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.
NASA Astrophysics Data System (ADS)
Temelkov, K. A.; Slaveeva, S. I.; Fedchenko, Yu I.; Chernogorova, T. P.
2018-03-01
Using the well-known Wassiljewa equation and a new simple method, the thermal conductivities of various 2- and 3-component gas mixtures were calculated and compared under gas-discharge conditions optimal for two prospective lasers excited in a nanosecond pulsed longitudinal discharge. By solving the non-stationary heat-conduction equation for electrons, a 2D numerical model was also developed for determination of the radial and temporal dependences of the electron temperature Te (r, t).
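For reference, the Wassiljewa mixing rule itself is straightforward to evaluate once the interaction coefficients A_ij are known; the sketch below uses illustrative placeholder values (pure-gas conductivities, mole fractions, A_ij), not the actual laser gas mixtures of the study.

    # Wassiljewa mixing rule: k_mix = sum_i x_i*k_i / sum_j x_j*A_ij.
    import numpy as np

    def wassiljewa(x, k, A):
        # x: mole fractions, k: pure-gas conductivities (W/m/K), A: interaction matrix.
        x, k = np.asarray(x), np.asarray(k)
        return float(np.sum(x * k / (A @ x)))

    x = [0.7, 0.3]                       # hypothetical 2-component mixture
    k = [0.15, 0.018]                    # e.g. a light and a heavy buffer gas
    A = np.array([[1.0, 2.1],            # A_ii = 1; off-diagonal values would come from
                  [0.45, 1.0]])          # a correlation such as Mason-Saxena
    print(wassiljewa(x, k, A))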
Spin-Multiplet Components and Energy Splittings by Multistate Density Functional Theory.
Grofe, Adam; Chen, Xin; Liu, Wenjian; Gao, Jiali
2017-10-05
Kohn-Sham density functional theory has been tremendously successful in chemistry and physics. Yet, it is unable to describe the energy degeneracy of spin-multiplet components with any approximate functional. This work features two contributions. (1) We present a multistate density functional theory (MSDFT) to represent spin-multiplet components and to determine multiplet energies. MSDFT is a hybrid approach, taking advantage of both wave function theory and density functional theory. Thus, the wave functions, electron densities and energy density-functionals for ground and excited states and for different components are treated on the same footing. The method is illustrated on valence excitations of atoms and molecules. (2) Importantly, a key result is that for cases in which the high-spin components can be determined separately by Kohn-Sham density functional theory, the transition density functional in MSDFT (which describes electronic coupling) can be defined rigorously. The numerical results may be explored to design and optimize transition density functionals for configuration coupling in multiconfigurational DFT.
Structural Integration of Sensors/Actuators by Laser Beam Melting for Tailored Smart Components
NASA Astrophysics Data System (ADS)
Töppel, Thomas; Lausch, Holger; Brand, Michael; Hensel, Eric; Arnold, Michael; Rotsch, Christian
2018-03-01
Laser beam melting (LBM), an additive laser powder bed fusion technology, enables the structural integration of temperature-sensitive sensors and actuators in complex monolithic metallic structures. The objective is to embed a functional component inside a metal part without losing its functionality by overheating. The first part of this paper addresses the development of a new process chain for bonded embedding of temperature-sensitive sensor/actuator systems by LBM. These systems are modularly built and coated by a multi-material/multi-layer thermal protection system of ceramic and metallic compounds. The characteristic of low global heat input in LBM is utilized for the functional embedding. In the second part, the specific functional design and optimization for tailored smart components with embedded functionalities are addressed. Numerical and experimental validated results are demonstrated on a smart femoral hip stem.
Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro
2018-02-01
To optimize monoclonal antibody (mAb) production in Chinese hamster ovary cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components, depending on pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically. We validated this optimal pH-shift schedule experimentally, and mAb production increased by approximately 40% with this schedule. Throughout this study, it was suggested that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable for optimizing any pH-shift schedule for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
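A bare-bones particle swarm optimizer in the spirit of the sizing algorithm above is sketched below; the two design variables (PV array size, battery capacity) and the cost/reliability model are toy placeholders, not the paper's hybrid-system model.

    # Canonical PSO minimizing a toy capital-cost-plus-unmet-load objective.
    import numpy as np

    rng = np.random.default_rng(7)

    def cost(x):
        pv, batt = x
        capital = 300.0 * pv + 150.0 * batt                       # toy capital cost
        unmet = max(0.0, 100.0 - (0.8 * pv + 0.3 * batt)) * 50.0  # toy reliability penalty
        return capital + unmet

    n, dims, iters = 30, 2, 100
    lo, hi = np.array([0.0, 0.0]), np.array([200.0, 400.0])
    pos = rng.uniform(lo, hi, (n, dims))
    vel = np.zeros((n, dims))
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((n, dims)), rng.random((n, dims))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print(gbest, cost(gbest))                       # optimum sizes and their cost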
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Genetically Engineered Microelectronic Infrared Filters
NASA Technical Reports Server (NTRS)
Cwik, Tom; Klimeck, Gerhard
1998-01-01
A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is unfeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device designs. Results for nanodevices are presented in a companion paper at this workshop.
NASA Technical Reports Server (NTRS)
Acikmese, Behcet A.; Carson, John M., III
2005-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.
Goto, Hayato
2016-02-22
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Carson, John M., III
2006-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network
NASA Astrophysics Data System (ADS)
Goto, Hayato
2016-02-01
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.
D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...
2015-12-21
We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. When some assumptions on the kernel functions hold, we prove that the resulting optimization problem is well-posed and discuss its implementation using Sandia's agile software components toolkit. As a result, the latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.
Optimized mode-field adapter for low-loss fused fiber bundle signal and pump combiners
NASA Astrophysics Data System (ADS)
Koška, Pavel; Baravets, Yauhen; Peterka, Pavel; Písařík, Michael; Bohata, Jan
2015-03-01
In our contribution we report a novel mode-field adapter incorporated inside a bundled tapered pump and signal combiner. Pump and signal combiners are crucial components of contemporary double-clad high-power fiber lasers. The proposed combiner allows simultaneous matching to a single-mode core at the input and output. We used advanced optimization techniques to match the combiner to a single-mode core simultaneously at the input and output and to minimize the losses of the combiner signal branch. We designed two arrangements of the combiner's mode-field adapters. Our numerical simulations estimate losses in the signal branches of the optimized combiners of 0.23 dB for the first design and 0.16 dB for the second design, for an SMF-28 input fiber and an SMF-28-matched output double-clad fiber at a wavelength of 2000 nm. The splice losses of the actual combiner are expected to be even lower thanks to dopant diffusion during the splicing process.
Information-theoretic approach to interactive learning
NASA Astrophysics Data System (ADS)
Still, S.
2009-01-01
The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
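A simplified sketch of the first MS-CAP stage as described above (per-dimension perturbation with a shrinking radius plus a growing attraction toward the current best) is given below; the constants, greedy acceptance rule and sphere test function are illustrative choices, not the published settings.

    # Simplified first-stage perturbation/attraction loop on a sphere function.
    import numpy as np

    rng = np.random.default_rng(8)
    def sphere(x):
        return float(np.sum(x ** 2))

    dims, n_particles, n_iter = 10, 20, 200
    pop = rng.uniform(-5, 5, (n_particles, dims))
    fitness = np.array([sphere(p) for p in pop])

    for it in range(n_iter):
        radius = 5.0 * (1.0 - it / n_iter)            # decaying perturbation radius
        pull = it / n_iter                            # increasing attraction to the best
        best = pop[fitness.argmin()]
        for i in range(n_particles):
            trial = pop[i].copy()
            for d in range(dims):                     # perturb each dimension independently
                trial[d] += rng.uniform(-radius, radius)
            trial += pull * (best - trial)            # attraction toward current best
            f = sphere(trial)
            if f < fitness[i]:                        # greedy acceptance
                pop[i], fitness[i] = trial, f

    print(fitness.min())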
Design, realization and structural testing of a compliant adaptable wing
NASA Astrophysics Data System (ADS)
Molinari, G.; Quack, M.; Arrieta, A. F.; Morari, M.; Ermanni, P.
2015-10-01
This paper presents the design, optimization, realization and testing of a novel wing morphing concept, based on distributed compliance structures, and actuated by piezoelectric elements. The adaptive wing features ribs with a selectively compliant inner structure, numerically optimized to achieve aerodynamically efficient shape changes while simultaneously withstanding aeroelastic loads. The static and dynamic aeroelastic behavior of the wing, and the effect of activating the actuators, is assessed by means of coupled 3D aerodynamic and structural simulations. To demonstrate the capabilities of the proposed morphing concept and optimization procedure, the wings of a model airplane are designed and manufactured according to the presented approach. The goal is to replace conventional ailerons, thus to achieve controllability in roll purely by morphing. The mechanical properties of the manufactured components are characterized experimentally, and used to create a refined and correlated finite element model. The overall stiffness, strength, and actuation capabilities are experimentally tested and successfully compared with the numerical prediction. To counteract the nonlinear hysteretic behavior of the piezoelectric actuators, a closed-loop controller is implemented, and its capability of accurately achieving the desired shape adaptation is evaluated experimentally. Using the correlated finite element model, the aeroelastic behavior of the manufactured wing is simulated, showing that the morphing concept can provide sufficient roll authority to allow controllability of the flight. The additional degrees of freedom offered by morphing can be also used to vary the plane lift coefficient, similarly to conventional flaps. The efficiency improvements offered by this technique are evaluated numerically, and compared to the performance of a rigid wing.
Wavelet-bounded empirical mode decomposition for measured time series analysis
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2018-01-01
Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD we apply the proposed method first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
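A hedged sketch of the isolation measure described above is given below: it computes the area under a simple bounding envelope of a candidate IMF's magnitude spectrum, so that a mode-mixed candidate scores a larger area than a well-separated one. The envelope construction (a windowed running maximum) is an assumption, not the bounding function used in the paper.

    # Area under a bounding envelope of an IMF's magnitude spectrum.
    import numpy as np

    def bounding_area(imf, fs, win=16):
        spec = np.abs(np.fft.rfft(imf))
        spec /= spec.max()                                   # normalize the spectrum
        pad = np.pad(spec, win, mode="edge")
        env = np.array([pad[i:i + 2 * win + 1].max() for i in range(spec.size)])
        df = fs / imf.size                                   # frequency resolution
        return float(env.sum() * df)                         # area under the bound

    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    clean = np.sin(2 * np.pi * 40 * t)                       # single-component "IMF"
    mixed = clean + 0.8 * np.sin(2 * np.pi * 120 * t)        # mode-mixed candidate
    print(bounding_area(clean, fs), bounding_area(mixed, fs))  # mixed IMF scores larger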
Dynamic simulation of a reverse Brayton refrigerator
NASA Astrophysics Data System (ADS)
Peng, N.; Lei, L. L.; Xiong, L. Y.; Tang, J. C.; Dong, B.; Liu, L. Q.
2014-01-01
A test refrigerator based on the modified reverse Brayton cycle has been developed in the Chinese Academy of Sciences recently. To study the behavior of this test refrigerator, a dynamic simulation has been carried out. The numerical model comprises the typical components of the test refrigerator: compressor, valves, heat exchangers, expander and heater. The simulator is based on an object-oriented approach, and each component is represented by a set of differential and algebraic equations. The control system of the test refrigerator is also simulated, which can be used to optimize the control strategies. This paper describes all the models and shows the simulation results. Comparisons between simulation results and experimental data are also presented. Experimental validation on the test refrigerator gives satisfactory results.
Smart manufacturing of complex shaped pipe components
NASA Astrophysics Data System (ADS)
Salchak, Y. A.; Kotelnikov, A. A.; Sednev, D. A.; Borikov, V. N.
2018-03-01
The manufacturing industry is constantly improving, and the most relevant current trend is widespread automation and optimization of the production process. This paper presents a novel approach for smart manufacturing of steel pipe valves. The system includes two main parts: mechanical treatment and quality assurance units. Mechanical treatment is performed by a milling machine under computerized numerical control, whilst the quality assurance unit contains three testing modules for different tasks: X-ray testing, optical scanning and ultrasound testing. Together these modules provide reliable results that capture failures of the technological process and deviations in the geometrical parameters of the valves. The system also allows detection of defects on the surface or in the inner structure of the component.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
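For readers unfamiliar with the algorithm, a minimal particle swarm optimization sketch on a deliberately noisy objective is given below; the inertia and acceleration coefficients are typical textbook values and the objective is a placeholder, not the bi-level MDO problem of the paper.

# A short, self-contained PSO sketch illustrating why a population-based,
# gradient-free method tolerates numerical noise in the objective.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # smooth bowl plus small "numerical noise", as in noisy MDO analyses
    return np.sum((x - 1.5) ** 2) + 1e-3 * rng.standard_normal()

dim, n_particles, iters = 4, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights

x = rng.uniform(-5, 5, (n_particles, dim))    # positions
v = np.zeros_like(x)                          # velocities
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()          # global best

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print("best point:", np.round(g, 3))          # should approach [1.5, 1.5, 1.5, 1.5]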
Genetic algorithm in the structural design of Cooke triplet lenses
NASA Astrophysics Data System (ADS)
Hazra, Lakshminarayan; Banerjee, Saswatee
1999-08-01
This paper is in tune with our efforts to develop a systematic method for multicomponent lens design. Our aim is to find a suitable starting point in the final configuration space, so that popular local search methods like damped least squares (DLS) may directly lead to a useful solution. For 'ab initio' design problems, a thin lens layout specifying the powers of the individual components and the intercomponent separations is worked out analytically. The requirements of central aberration targets for the individual components needed to satisfy the prespecified primary aberration targets for the overall system are then determined by nonlinear optimization. The next step involves structural design of the individual components by optimization techniques. This general method may be adapted for the design of triplets and their derivatives. However, for the thin lens design of a Cooke triplet composed of three airspaced singlets, the two steps of optimization mentioned above may be combined into a single optimization procedure. The optimum configuration of each singlet, catering to the required Gaussian specification and primary aberration targets for the Cooke triplet, is determined by an application of a genetic algorithm (GA). Our implementation of this algorithm is based on simulations of some complex tools of natural evolution, like selection, crossover and mutation. Our version of GA may or may not converge to a unique optimum, depending on some of the algorithm-specific parameter values. With our algorithm, practically useful solutions are always available, although convergence to a global optimum cannot be guaranteed. This is perfectly in keeping with our need to allow 'floating' of aberration targets at the subproblem level. Some numerical results dealing with our preliminary investigations of this problem are presented.
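A compact sketch of the genetic-algorithm machinery (selection, crossover, mutation) is given below, minimizing a toy merit function in place of the aberration-target residuals; the real-coded encoding and the operator settings are illustrative choices, not those of the authors.

# A minimal real-coded GA: tournament selection, single-point crossover,
# Gaussian mutation, and elitism, applied to a stand-in merit function.
import numpy as np

rng = np.random.default_rng(1)

def merit(x):                    # stand-in for aberration-target residuals
    return np.sum((x - np.array([0.5, -0.2, 0.8])) ** 2)

pop_size, n_gen, n_var = 40, 100, 3
pop = rng.uniform(-1, 1, (pop_size, n_var))

for _ in range(n_gen):
    fitness = np.array([merit(ind) for ind in pop])
    # tournament selection
    parents = []
    for _ in range(pop_size):
        i, j = rng.integers(pop_size, size=2)
        parents.append(pop[i] if fitness[i] < fitness[j] else pop[j])
    parents = np.array(parents)
    # single-point crossover on consecutive pairs
    children = parents.copy()
    for k in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_var)
        children[k, cut:], children[k + 1, cut:] = (
            parents[k + 1, cut:].copy(), parents[k, cut:].copy())
    # Gaussian mutation
    children += 0.05 * rng.standard_normal(children.shape)
    # elitism: carry over the best individual of the parent generation
    children[0] = pop[np.argmin(fitness)]
    pop = children

best = pop[np.argmin([merit(ind) for ind in pop])]
print("best solution:", np.round(best, 3))   # near [0.5, -0.2, 0.8]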
NASA Astrophysics Data System (ADS)
Denis, Vincent
2008-09-01
This paper presents a statistical method for determining the dimensions, tolerances and specifications of components for the Laser MegaJoule (LMJ). Numerous constraints inherent to a large facility require specific tolerances: the huge number of optical components; the interdependence of these components between the beams of the same bundle; angular multiplexing for the amplifier section; distinct operating modes between the alignment and firing phases; the definition and use of alignment software in place of classic optimization. This method provides greater flexibility in determining the positioning and manufacturing specifications of the optical components. Given the enormous power of the Laser MegaJoule (over 18 kJ in the infrared and 9 kJ in the ultraviolet), one of the major risks is damage to the optical mounts and pollution of the installation by mechanical ablation. This method enables estimation of the beam occultation probabilities and quantification of the risks for the facility. All the simulations were run using the ZEMAX-EE optical design software.
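The flavor of such a statistical tolerancing analysis can be sketched with a small Monte Carlo propagation of component misalignments to a beam decentre and an occultation probability estimate; all numbers and the linear sensitivity model below are invented for illustration and bear no relation to actual LMJ specifications.

# A hedged Monte Carlo tolerancing sketch: random component tilts are
# propagated linearly to a beam decentre at a target plane, and the
# "occultation" probability is estimated against an assumed aperture margin.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_components = 100_000, 8
tilt_sigma_urad = 5.0            # assumed rms tilt per component
lever_arm_m = 10.0               # assumed distance over which a tilt acts
aperture_margin_mm = 0.3         # assumed clear-aperture margin at the target plane

# each trial: sum of independent component contributions to beam decentre (mm)
tilts = rng.normal(0.0, tilt_sigma_urad * 1e-6, (n_trials, n_components))
decentre_mm = np.sum(tilts * lever_arm_m, axis=1) * 1e3

p_occult = np.mean(np.abs(decentre_mm) > aperture_margin_mm)
print(f"estimated occultation probability: {p_occult:.4f}")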
An extended continuum model considering optimal velocity change with memory and numerical tests
NASA Astrophysics Data System (ADS)
Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng
2018-01-01
In this paper, an extended continuum model of traffic flow is proposed that considers optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation accounting for optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model and explore how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the numerical results demonstrate that accounting for optimal velocity changes with memory avoids the drawbacks of relying on historical information, increases the stability of traffic flow on the road, and thereby improves traffic flow stability and reduces vehicles' energy consumption.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructure in our social life, continue to expand. In particular, embedded OSS systems such as Android, BusyBox and TRON have been gaining a lot of attention in the embedded system area. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
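A minimal sketch of the model-comparison step is given below: two candidate lifetime (hazard-rate) models are fitted to failure-occurrence time-interval data and ranked by a goodness-of-fit criterion. The exponential and Weibull models and the synthetic data are stand-ins for the conventional and flexible hazard rate models discussed in the paper.

# Fit two lifetime models to failure time intervals and compare by AIC.
import numpy as np
from scipy import stats

intervals = stats.weibull_min.rvs(c=0.8, scale=50.0, size=200, random_state=3)

# exponential model (constant hazard rate)
loc_e, scale_e = stats.expon.fit(intervals, floc=0)
ll_exp = np.sum(stats.expon.logpdf(intervals, loc=loc_e, scale=scale_e))
aic_exp = 2 * 1 - 2 * ll_exp                 # one free parameter

# Weibull model (monotone, more flexible hazard rate)
c_w, loc_w, scale_w = stats.weibull_min.fit(intervals, floc=0)
ll_wbl = np.sum(stats.weibull_min.logpdf(intervals, c_w, loc=loc_w, scale=scale_w))
aic_wbl = 2 * 2 - 2 * ll_wbl                 # shape and scale free

print(f"AIC exponential: {aic_exp:.1f}   AIC Weibull: {aic_wbl:.1f}")
print("preferred model:", "Weibull" if aic_wbl < aic_exp else "exponential")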
NASA Astrophysics Data System (ADS)
Prawin, J.; Rama Mohan Rao, A.
2018-01-01
Knowledge of the dynamic loads acting on a structure is required for many practical engineering problems, such as structural strength analysis, health monitoring and fault diagnosis, and vibration isolation. In this paper, we present an online input force time history reconstruction algorithm using Dynamic Principal Component Analysis (DPCA), applied to acceleration time history response measurements with moving windows. We also present an optimal sensor placement algorithm to place a limited number of sensors at dynamically sensitive spatial locations. The major advantage of the proposed input force identification algorithm is that, unlike earlier formulations, it does not require finite element idealization of the structure and is therefore free from physical modelling errors. We have considered three numerical examples to validate the accuracy of the proposed DPCA-based method. The effects of measurement noise, multiple force identification, different kinds of loading, incomplete measurements, and high noise levels are investigated in detail. Parametric studies have been carried out to arrive at the optimal window size and the percentage of window overlap. The studies presented in this paper clearly establish the merits of the proposed algorithm for online load identification.
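The moving-window, time-lagged PCA building block behind DPCA-type processing can be sketched as follows; this shows only the windowed decomposition of multichannel acceleration records, not the paper's full force-reconstruction or sensor-placement algorithms, and the window length, lag count and overlap are assumed values.

# Windowed dynamic PCA: within each moving window, stack lagged copies of the
# multichannel acceleration records and extract principal directions via SVD.
import numpy as np

def windowed_dynamic_pca(acc, window, lags, n_keep=2):
    """acc: (n_samples, n_channels). Returns a list of (scores, components)."""
    results = []
    step = window // 2                          # 50% window overlap (assumed)
    for start in range(0, acc.shape[0] - window - lags, step):
        seg = acc[start:start + window + lags]
        # time-lagged (dynamic) data matrix: lagged channel blocks side by side
        cols = [seg[l:l + window] for l in range(lags + 1)]
        X = np.hstack(cols)
        X = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        scores = U[:, :n_keep] * S[:n_keep]     # dominant dynamic components
        results.append((scores, Vt[:n_keep]))
    return results

# toy data: two channels driven by a common low-frequency "force" plus noise
rng = np.random.default_rng(4)
t = np.arange(0, 10, 0.01)
force = np.sin(2 * np.pi * 0.5 * t)
acc = np.column_stack([force + 0.1 * rng.standard_normal(t.size),
                       0.7 * force + 0.1 * rng.standard_normal(t.size)])
out = windowed_dynamic_pca(acc, window=200, lags=5)
print(len(out), "windows processed; first window scores shape:", out[0][0].shape)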
NASA Astrophysics Data System (ADS)
Kaabi, Abderrahmen; Bienvenu, Yves; Ryckelynck, David; Pierre, Bertrand
2014-03-01
Power electronics modules (>100 A, >500 V) are essential components for the development of electrical and hybrid vehicles. These modules are formed from silicon chips (transistors and diodes) assembled on copper substrates by soldering. Owing to the fact that the assembly is heterogeneous, and because of thermal gradients, shear stresses are generated in the solders and cause premature damage to such electronics modules. This work focuses on architectured materials for the substrate and on lead-free solders to reduce the mechanical effects of differential expansion, improve the reliability of the assembly, and achieve a suitable operating temperature (<175°C). These materials are composites whose thermomechanical properties have been optimized by numerical simulation and validated experimentally. The substrates have good thermal conductivity (>280 W m-1 K-1) and a macroscopic coefficient of thermal expansion intermediate between those of Cu and Si, as well as limited structural evolution in service conditions. An approach combining design, optimization, and manufacturing of new materials has been followed in this study, leading to improved thermal cycling behavior of the component.
A framework for optimizing micro-CT in dual-modality micro-CT/XFCT small-animal imaging system
NASA Astrophysics Data System (ADS)
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Cho, Sang Hyun
2017-09-01
Dual-modality Computed Tomography (CT)/X-ray Fluorescence Computed Tomography (XFCT) can be a valuable tool for imaging and quantifying the organ and tissue distribution of small concentrations of high atomic number materials in a small-animal system. In this work, the framework for optimizing the micro-CT imaging system component of the dual-modality system is described, either when the micro-CT images are acquired concurrently with XFCT using the x-ray spectral conditions for XFCT, or when the micro-CT images are acquired sequentially and independently of XFCT. This framework utilizes cascaded systems analysis for task-specific determination of the detectability index using numerical observer models at a given radiation dose, where the radiation dose is determined using Monte Carlo simulations.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
NASA Astrophysics Data System (ADS)
Huq, Sadiq; De Roo, Frederik; Foken, Thomas; Mauder, Matthias
2017-10-01
The Campbell CSAT3 sonic anemometer is one of the most popular instruments for turbulence measurements in basic micrometeorological research and ecological applications. While measurement uncertainty has been characterized by field experiments and wind-tunnel studies in the past, there are conflicting estimates, which motivated us to conduct a numerical experiment using large-eddy simulation to evaluate the probe-induced flow distortion of the CSAT3 anemometer under controlled conditions, and with exact knowledge of the undisturbed flow. As opposed to wind-tunnel studies, we imposed oscillations in both the vertical and horizontal velocity components at the distinct frequencies and amplitudes found in typical turbulence spectra in the surface layer. The resulting flow-distortion errors for the standard deviations of the vertical velocity component range from 3 to 7%, and from 1 to 3% for the horizontal velocity component, depending on the azimuth angle. The magnitude of these errors is almost independent of the frequency of wind speed fluctuations, provided the amplitude is typical for surface-layer turbulence. A comparison of the corrections for transducer shadowing proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol 155:371-395, 2015) shows that both methods compensate for a large part of the observed error, but do not sufficiently account for the azimuth dependency. Further numerical simulations could be conducted in the future to characterize the flow distortion induced by other existing types of sonic anemometers for the purposes of optimizing their geometry.
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong
2018-01-01
Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
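The bias issue and its non-convex remedy can be illustrated numerically: the soft threshold (the proximal operator of the ℓ1 norm) shrinks large coefficients, while a firm threshold, the proximal operator of one standard parameterized non-convex penalty, leaves them unbiased. The particular parameterization below (threshold knots at lam and lam/a) is an assumed, common choice and not necessarily the one developed in the thesis.

# Soft vs. firm thresholding on a noisy sparse signal: the firm threshold
# reduces the systematic underestimation of large coefficients.
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, a):
    """Firm threshold with knots lam and lam/a, 0 < a < 1 (a -> 0 recovers soft)."""
    hi = lam / a
    return np.where(np.abs(y) <= lam, 0.0,
           np.where(np.abs(y) >= hi, y,
                    np.sign(y) * hi * (np.abs(y) - lam) / (hi - lam)))

rng = np.random.default_rng(10)
true = np.zeros(100)
true[[5, 20, 60]] = [4.0, -3.0, 5.0]                 # sparse ground truth
noisy = true + 0.3 * rng.standard_normal(true.size)

lam, a = 1.0, 0.5
for name, est in [("soft", soft_threshold(noisy, lam)),
                  ("firm", firm_threshold(noisy, lam, a))]:
    err = np.linalg.norm(est - true)
    print(f"{name} threshold: error {err:.3f}, estimate at index 5 = {est[5]:.3f}")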
Numerical Simulation of Thermal Response and Ablation Behavior of a Hybrid Carbon/Carbon Composite
NASA Astrophysics Data System (ADS)
Zhang, Bai; Li, Xudong
2017-09-01
The thermal response and ablation behavior of a hybrid carbon/carbon (C/C) composite are studied herein by using a numerical model. This model is based on the energy- and mass-conservation principles as well as on the calculation of the thermophysical properties of materials. The thermal response and ablation behavior are simulated from the perspective of the matrix and fiber components of a hybrid C/C composite. The thermophysical properties during ablation are calculated, and a moving boundary is implemented to consider the recession of the ablation surface. The temperature distribution, thermophysical properties, char layer thickness, linear ablation rate, mass flow rate of the pyrolysis gases, and mass loss of the hybrid C/C composite are quantitatively predicted. This numerical study describing the thermal response and ablation behavior provides a fundamental understanding of the ablative mechanism of a hybrid C/C composite, serving as a reference and basis for further designs and optimizations of thermoprotective materials.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
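A rough numerical illustration of the quantities involved is sketched below: the entropy H(p) is integrated along the straight-line path from a to b with respect to arc length, giving an upper bound on the minimal-entropy path distance, and the result is compared with ||b - a||_2. The straight line is generally not the optimal path, and the exact functional in the paper may differ in detail, so this is only a conceptual sketch.

# Entropy-weighted length of the straight-line probability path from a to b.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-15, 1.0)
    return -np.sum(p * np.log(p))

def straight_line_entropy_length(a, b, n=2001):
    a, b = np.asarray(a, float), np.asarray(b, float)
    ts = np.linspace(0.0, 1.0, n)
    speed = np.linalg.norm(b - a)                # |p'(t)| is constant on a line
    vals = np.array([entropy(a + t * (b - a)) for t in ts])
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))   # trapezoid rule
    return integral * speed

# made-up frequency profiles over the bases A, C, G, T
a = np.array([0.30, 0.20, 0.20, 0.30])
b = np.array([0.10, 0.40, 0.35, 0.15])
print("entropy path length (straight line):", round(straight_line_entropy_length(a, b), 4))
print("Euclidean distance ||b - a||_2:     ", round(float(np.linalg.norm(b - a)), 4))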
2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation
NASA Astrophysics Data System (ADS)
Proctor, Camron Lisle
The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard-of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, the efficacy of the inverse-design tool for the inviscid adjoint equations is verified, and possible numerical implementation pitfalls are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and of techniques used to further the convergence of the optimizer. The relationship between limiting step size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
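The gradient-filtering step mentioned above can be illustrated in a few lines: a noisy surface-sensitivity distribution is smoothed with a Savitzky-Golay filter before being handed to the optimizer. The window length, polynomial order and synthetic gradient are illustrative choices, not those of the thesis.

# Smoothing a noisy shape-sensitivity (gradient) distribution along the surface.
import numpy as np
from scipy.signal import savgol_filter

s = np.linspace(0.0, 1.0, 121)                     # surface arc-length coordinate
true_grad = np.sin(2 * np.pi * s) * np.exp(-3 * s) # smooth underlying sensitivity
noisy_grad = true_grad + 0.05 * np.random.default_rng(5).standard_normal(s.size)

filtered_grad = savgol_filter(noisy_grad, window_length=11, polyorder=3)
print("rms noise before:", round(float(np.std(noisy_grad - true_grad)), 4),
      " after:", round(float(np.std(filtered_grad - true_grad)), 4))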
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide the required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraint of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on the combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem, whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common approach used in industry that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
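A much-simplified, end-to-end sketch of the proposed workflow is given below: Monte Carlo simulation of an inspect-and-replace policy over a DOE grid of inspection intervals, regression of the simulated cost rate and availability as response surfaces, and selection of the interval that minimizes cost subject to an availability constraint. The single-unit latent-failure model and all cost figures are invented and far simpler than the TLD/FADEC system treated in the thesis.

# MC simulation over a DOE grid, polynomial response surfaces, constrained pick.
import numpy as np

rng = np.random.default_rng(6)

def simulate(tau, n_cycles=20000, shape=2.0, scale=1000.0,
             c_inspect=1.0, c_replace=50.0):
    """Return (availability, cost per hour) for inspection interval tau."""
    x = scale * rng.weibull(shape, n_cycles)        # latent failure times
    detect = np.ceil(x / tau) * tau                 # failure found at next inspection
    uptime, cycle = x, detect
    n_inspections = detect / tau
    availability = uptime.sum() / cycle.sum()
    cost_rate = (c_inspect * n_inspections + c_replace).sum() / cycle.sum()
    return availability, cost_rate

taus = np.linspace(50, 800, 12)                      # DOE over the interval
sims = np.array([simulate(t) for t in taus])
avail_rse = np.polynomial.Polynomial.fit(taus, sims[:, 0], deg=2)
cost_rse = np.polynomial.Polynomial.fit(taus, sims[:, 1], deg=2)

# optimize on the response surfaces: min cost s.t. availability >= 0.90
grid = np.linspace(50, 800, 1000)
feasible = grid[avail_rse(grid) >= 0.90]
best_tau = feasible[np.argmin(cost_rse(feasible))]
print("optimal inspection interval (RSE):", round(float(best_tau), 1), "hours")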
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure of the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is described. The procedure consists of a few steps. In the first step, the hydrodynamic flow of the gas in the rotating rotor of the GC is computed numerically. In the second step, the diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming optimization of the GC is performed, yielding the maximum separative power. The optimization is based on the BOBYQA method, using the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence is achieved by using a direct solver for the hydrodynamic and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
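The outer optimization step can be sketched with a derivative-free method applied to a smooth surrogate of the separative power. SciPy does not include BOBYQA, so Powell's method is used here as a stand-in, and the surrogate objective and its parameters are invented for illustration rather than coming from the coupled hydrodynamics/diffusion solver.

# Derivative-free maximization of a made-up separative-power surrogate.
import numpy as np
from scipy.optimize import minimize

def separative_power_surrogate(p):
    # p = [feed flow, cut, temperature drive], all nondimensionalized (assumed)
    feed, cut, drive = p
    return (np.exp(-(feed - 1.2) ** 2) * np.exp(-10 * (cut - 0.5) ** 2)
            * (1 - np.exp(-3 * drive)))

res = minimize(lambda p: -separative_power_surrogate(p),
               x0=[1.0, 0.4, 0.5], method="Powell",
               bounds=[(0.1, 3.0), (0.05, 0.95), (0.0, 2.0)],
               options={"xtol": 1e-6, "maxiter": 200})
print("optimal parameters:", np.round(res.x, 3),
      " separative power:", round(-res.fun, 4))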
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
NASA Astrophysics Data System (ADS)
D'Amours, Guillaume; Rahem, Ahmed; Williams, Bruce; Worswick, Michael; Mayer, Robert
2007-05-01
The automotive industry, with an increasing demand to reduce vehicle weight through the adoption of lightweight materials, requires a search for efficient methods suited to these materials. One attractive concept is hydroforming of aluminium tubes. By using FE simulations, the process can be optimized to reduce the risk of failure while maintaining energy absorption and component integrity under crash conditions. It is important to capture the level of residual ductility after forming to allow proper design for crashworthiness. This paper presents numerical and experimental studies that have been carried out for high-pressure hydroforming operations to study the influence of the tube corner radius, end feeding, material thinning, and work hardening in 76.2 mm diameter, 3 mm wall thickness AA5754 aluminium alloy tube. End feeding was used to increase the formability of the tubes. The influence of the end feed displacement versus tube forming pressure schedule was studied to optimize the forming process and reduce thinning. Validation of the numerical simulations was performed by comparing the predicted strain distributions and thinning with measured quantities. The effect of element formulation (thin shell versus solid elements) was also considered in the models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
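A stripped-down illustration of checkpoint/recompute adjoints is sketched below: sparse "disk" checkpoints are stored every K steps during the forward sweep, and each segment is recomputed into "memory" during the backward sweep. The paper's scheme combines bandwidth-limited disk checkpoints with binomial memory checkpointing; the uniform spacing used here only illustrates the storage/recomputation trade-off, and the toy time-stepping model is invented.

# Checkpointed adjoint of a simple explicit time-stepping loop.
import numpy as np

def step(u, dt=0.01):                 # forward model: one explicit time step
    return u + dt * (1.0 - u * u)

def dstep(u, ubar, dt=0.01):          # adjoint of one step: ubar_n = (df/du) * ubar_{n+1}
    return (1.0 - 2.0 * dt * u) * ubar

N, K = 200, 20                        # total steps, disk-checkpoint spacing
u0 = 0.1

# forward sweep: keep only every K-th state on "disk"
disk = {0: u0}
u = u0
for n in range(N):
    u = step(u)
    if (n + 1) % K == 0:
        disk[n + 1] = u

# reverse sweep: accumulate d(u_N)/d(u_0) segment by segment
ubar = 1.0                            # seed: derivative of output w.r.t. u_N
for seg_start in range(N - K, -1, -K):
    mem = [disk[seg_start]]           # recompute this segment into "memory"
    for n in range(seg_start, seg_start + K):
        mem.append(step(mem[-1]))
    for n in range(seg_start + K - 1, seg_start - 1, -1):
        ubar = dstep(mem[n - seg_start], ubar)

print("adjoint sensitivity d u_N / d u_0 =", ubar)
# finite-difference check
eps = 1e-6
up, um = u0 + eps, u0 - eps
for _ in range(N):
    up, um = step(up), step(um)
print("finite-difference check          =", (up - um) / (2 * eps))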
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the righthand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network
Goto, Hayato
2016-01-01
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997
Using "big data" to optimally model hydrology and water quality across expansive regions
Roehl, E.A.; Cook, J.B.; Conrads, P.A.
2009-01-01
This paper describes a new divide and conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering, nonlinear, multivariate sensitivity analysis and predictive modeling using multi-layer perceptron artificial neural networks, and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across Western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
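A miniature version of the divide-and-conquer pipeline can be sketched with off-the-shelf tools: cluster sites by a simple time-series signature, train one MLP sub-model per cluster, and route a new site to the best-matching sub-model. The synthetic data, the correlation-based signature and all hyperparameters are placeholders rather than the approach's actual feature engineering.

# Cluster sites, train per-cluster MLP sub-models, route a new site.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_sites, n_steps = 60, 200
drivers = rng.standard_normal((n_sites, n_steps))          # e.g. a rainfall signal
regime = rng.integers(0, 2, n_sites)                        # two hidden behaviors
response = (np.where(regime[:, None] == 0, 0.8 * drivers, -0.5 * drivers)
            + 0.05 * rng.standard_normal((n_sites, n_steps)))

# 1) cluster sites on a simple time-series signature (correlation with the driver)
signature = np.array([[np.corrcoef(drivers[i], response[i])[0, 1]] for i in range(n_sites)])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(signature)
labels = km.labels_

# 2) one MLP sub-model per cluster, mapping driver value -> response value
models = {}
for c in np.unique(labels):
    X = drivers[labels == c].reshape(-1, 1)
    y = response[labels == c].ravel()
    models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X, y)

# 3) a "new" site is routed to the closest cluster and predicted by its sub-model
new_driver = rng.standard_normal(n_steps)
new_response = 0.8 * new_driver                              # behaves like regime 0
new_sig = np.array([[np.corrcoef(new_driver, new_response)[0, 1]]])
c = km.predict(new_sig)[0]
pred = models[c].predict(new_driver.reshape(-1, 1))
print("routed to cluster", c, "- rms prediction error:",
      round(float(np.std(pred - new_response)), 3))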
Specialty Task Force: A Strategic Component to Electronic Health Record (EHR) Optimization.
Romero, Mary Rachel; Staub, Allison
2016-01-01
The post-implementation stage comes after an electronic health record (EHR) deployment. Analysts and end users deal with the reality that some of the concepts and designs initially planned and created may not be complementary to the workflow, creating anxiety, dissatisfaction, and failure of early adoption of the system. Problems encountered during deployment are numerous and can vary from simple to complex. Redundant ticket submission creates a backlog for Information Technology personnel, resulting in delays in resolving concerns with the EHR system. The process of optimization allows for evaluation of the system and reassessment of users' needs. A solid and well-executed optimization infrastructure can help minimize unexpected end-user disruptions and help tailor the system to meet regulatory agency goals and practice standards. A well-devised plan to resolve problems during post-implementation is necessary for cost containment and to streamline communication efforts. Creating a specialty-specific collaborative task force is efficacious and expedites resolution of users' concerns through a more structured process.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
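A small sketch of this surrogate-guided strategy is shown below: a Gaussian-process (kriging) model is fitted to the expensive objective values gathered so far and used to pick the next grid point to evaluate. The acquisition rule and kernel are generic illustrative choices, not the authors' exact algorithm.

# Kriging surrogate guiding a grid search for the minimizer of a costly objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_objective(x):                 # placeholder for a costly simulation
    return (x - 0.3) ** 2 + 0.1 * np.sin(12 * x)

grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
X = np.array([[0.0], [0.5], [1.0]])         # initial design
y = expensive_objective(X).ravel()

for it in range(10):
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(0.2),
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    candidate = grid[np.argmin(mean - 0.5 * std)]     # simple optimistic criterion
    if np.any(np.all(np.isclose(X, candidate), axis=1)):
        break                                          # already evaluated: stop
    X = np.vstack([X, candidate])
    y = np.append(y, expensive_objective(candidate[0]))

best = X[np.argmin(y)][0]
print("best point found:", round(float(best), 3), "with", len(y), "evaluations")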
The optimal design of UAV wing structure
NASA Astrophysics Data System (ADS)
Długosz, Adam; Klimek, Wiktor
2018-01-01
The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model, consisting of different composite materials, is created. The adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
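The Pareto-optimal set mentioned above can be extracted from a table of candidate designs with a short non-dominated filter, as sketched below; the three objective columns (a stress measure, a compliance proxy for stiffness, and mass, all treated as minimized) are filled with random placeholders standing in for FEM results.

# Non-dominated (Pareto) filtering of candidate designs.
import numpy as np

def pareto_mask(F):
    """F: (n_designs, n_objectives), all minimized. True where non-dominated."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if np.any(dominated_by):
            mask[i] = False
    return mask

rng = np.random.default_rng(8)
F = rng.random((200, 3))                   # columns: stress, compliance, mass
front = F[pareto_mask(F)]
print("Pareto-optimal designs:", front.shape[0], "of", F.shape[0])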
Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Xu, Yan; Tomsovic, Kevin
In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
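The two-stage structure of the bidding problem can be sketched with a tiny scenario-based linear program: one day-ahead purchase decision shared by all scenarios, plus per-scenario real-time balancing, minimizing expected cost. Prices, probabilities and net loads are invented, and the integer unit-commitment and robust components of the paper's model are omitted, so this shows only the stochastic skeleton.

# Two-stage stochastic LP: day-ahead purchase plus per-scenario real-time balancing.
import numpy as np
from scipy.optimize import linprog

pi = np.array([0.3, 0.5, 0.2])            # scenario probabilities
net_load = np.array([40.0, 55.0, 70.0])   # load minus renewable output (kW)
c_da = 0.10                               # day-ahead price ($/kWh)
c_rt = np.array([0.12, 0.15, 0.25])       # real-time prices per scenario

# decision vector: [P_da, P_rt_1, P_rt_2, P_rt_3], all >= 0
c = np.concatenate([[c_da], pi * c_rt])   # expected cost coefficients
# coverage constraints: P_da + P_rt_s >= net_load_s  ->  -P_da - P_rt_s <= -net_load_s
A_ub = np.zeros((3, 4))
A_ub[:, 0] = -1.0
A_ub[np.arange(3), 1 + np.arange(3)] = -1.0
b_ub = -net_load

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("day-ahead bid:", round(res.x[0], 1), "kW")
print("real-time balancing per scenario:", np.round(res.x[1:], 1))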
Optimal Control of Induction Machines to Minimize Transient Energy Losses
NASA Astrophysics Data System (ADS)
Plathottam, Siby Jose
Induction machines are electromechanical energy conversion devices comprised of a stator and a rotor. Torque is generated due to the interaction between the rotating magnetic field from the stator and the current induced in the rotor conductors. Their speed and torque output can be precisely controlled by manipulating the magnitude, frequency, and phase of the three input sinusoidal voltage waveforms. Their ruggedness, low cost, and high efficiency have made them a ubiquitous component of nearly every industrial application. Thus, even a small improvement in their energy efficiency tends to give a large amount of electrical energy savings over the lifetime of the machine. Hence, increasing energy efficiency (reducing energy losses) in induction machines is a constrained optimization problem that has attracted attention from researchers. The energy conversion efficiency of induction machines depends on both the speed-torque operating point and the input voltage waveform. It also depends on whether the machine is in the transient or steady state. Maximizing energy efficiency during steady state is a Static Optimization problem that has been extensively studied, with commercial solutions available. On the other hand, improving energy efficiency during transients is a Dynamic Optimization problem that is sparsely studied. This dissertation exclusively focuses on improving energy efficiency during transients. This dissertation treats the transient energy loss minimization problem as an optimal control problem which consists of a dynamic model of the machine and a cost functional. The rotor-field-oriented, current-fed model of the induction machine is selected as the dynamic model. The rotor speed and rotor d-axis flux are the state variables in the dynamic model. The stator currents, referred to as the d- and q-axis currents, are the control inputs. A cost functional is proposed that assigns a cost both to the energy losses in the induction machine and to the deviations from desired speed-torque-magnetic flux setpoints. Using Pontryagin's minimum principle, a set of necessary conditions that must be satisfied by the optimal control trajectories is derived. The conditions take the form of a two-point boundary value problem that can be solved numerically. The conjugate gradient method, modified using the Hestenes-Stiefel formula, was used to obtain the numerical solution of both the control and state trajectories. Using the distinctive shape of the numerical trajectories as inspiration, analytical expressions were derived for the state and control trajectories. It was shown that the trajectory could be fully described by finding the solution of a one-dimensional optimization problem. The sensitivity of both the optimal trajectory and the optimal energy efficiency to different induction machine parameters was analyzed. A non-iterative solution that can use feedback for generating optimal control trajectories in real time was explored. It was found that an artificial neural network could be trained using the numerical solutions and made to emulate the optimal control trajectories with a high degree of accuracy. Hence a neural network along with supervisory logic was implemented and used in a real-time simulation to control the Finite Element Method model of the induction machine. The results were compared with three other control regimes, and the optimal control system was found to have the highest energy efficiency for the same drive cycle.
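The solution machinery described above (Pontryagin conditions leading to a two-point boundary value problem) can be illustrated on a toy scalar problem rather than the induction-machine model: minimize the integral of (x^2 + u^2) subject to x' = -x + u with x(0) = 1 and a free terminal state. The derivation of the costate equation and the use of solve_bvp below follow the standard textbook recipe and are not the thesis's conjugate-gradient implementation.

# Pontryagin two-point BVP for a toy optimal control problem, solved with solve_bvp.
import numpy as np
from scipy.integrate import solve_bvp

T = 2.0

def odes(t, y):
    x, lam = y
    u = -lam / 2.0                    # stationarity of the Hamiltonian: 2u + lam = 0
    dx = -x + u
    dlam = -2.0 * x + lam             # lam' = -dH/dx
    return np.vstack([dx, dlam])

def bc(ya, yb):
    return np.array([ya[0] - 1.0,     # x(0) = 1
                     yb[1]])          # lam(T) = 0 (free terminal state)

t = np.linspace(0.0, T, 50)
y0 = np.zeros((2, t.size))
y0[0] = 1.0
sol = solve_bvp(odes, bc, t, y0)

u_opt = -sol.sol(t)[1] / 2.0
print("converged:", sol.status == 0, " u(0) =", round(float(u_opt[0]), 4))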
A sensitivity equation approach to shape optimization in fluid flows
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1994-01-01
A sensitivity equation method for shape optimization problems is applied. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.
Numerical simulation of active track tensioning system for autonomous hybrid vehicle
NASA Astrophysics Data System (ADS)
Mȩżyk, Arkadiusz; Czapla, Tomasz; Klein, Wojciech; Mura, Gabriel
2017-05-01
One of the most important components of a high-speed tracked vehicle is an efficient suspension system. The vehicle should be able to operate both in rough terrain, for performance of engineering tasks, and on the road at high speed. This is especially important for an autonomous platform that operates either with or without human supervision, where the vibration level can rise compared to a manned vehicle. In this case critical electronic and electric parts must be protected to ensure the reliability of the vehicle. The paper presents a methodology for determining the dynamic parameters of the suspension system of an autonomous high-speed tracked platform with a total weight of about 5 tonnes and a hybrid propulsion system. The torsion-bar system, common among tracked-vehicle suspension solutions and cost-efficient, was chosen. One of the most important issues was determining the optimal track tensioning; in this case an active hydraulic system was applied. The selection of system parameters was performed using a numerical model based on a multi-body dynamics approach. The results of the numerical analysis were used to define the parameters of the active tensioning control system setup. LMS Virtual.Lab Motion was used for the multi-body dynamics calculations and Matlab/SIMULINK for the control system simulation.
NASA Astrophysics Data System (ADS)
Davis, Joshua R.; Giorgis, Scott
2014-11-01
We describe a three-part approach for modeling shape preferred orientation (SPO) data of spheroidal clasts. The first part consists of criteria to determine whether a given SPO and clast shape are compatible. The second part is an algorithm for randomly generating spheroid populations that match a prescribed SPO and clast shape. In the third part, numerical optimization software is used to infer deformation from spheroid populations, by finding the deformation that returns a set of post-deformation spheroids to a minimally anisotropic initial configuration. Two numerical experiments explore the strengths and weaknesses of this approach, while giving information about the sensitivity of the model to noise in data. In monoclinic transpression of oblate rigid spheroids, the model is found to constrain the shortening component but not the simple shear component. This modeling approach is applied to previously published SPO data from the western Idaho shear zone, a monoclinic transpressional zone that deformed a feldspar megacrystic gneiss. Results suggest at most 5 km of shortening, as well as pre-deformation SPO fabric. The shortening estimate is corroborated by a second model that assumes no pre-deformation fabric.
Advances and trends in computational structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1986-01-01
Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.
An overview of CAM: components and clinical uses.
Kiefer, David; Pitluk, Jessica; Klunk, Kathryn
2009-01-01
Complementary and alternative medicine (CAM), more recently known as integrative health or integrative medicine, is a diverse field comprising numerous treatments and practitioners of various levels of training. This review defines several of the main CAM modalities and reviews some of the research relevant to their clinical application. The goal is to provide healthcare providers with a basic understanding of CAM to start the incorporation of proven treatments into their clinical practice as well as guide them to working with CAM providers; ultimately, such knowledge is a fundamental part of a collaborative approach to optimal patient health and wellness.
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.
A numerical study of mixing enhancement in supersonic reacting flow fields [in scramjets]
NASA Technical Reports Server (NTRS)
Drummond, J. Philip; Mukunda, H. S.
1988-01-01
NASA Langley has intensively investigated the components of ramjet and scramjet systems for endoatmospheric, airbreathing hypersonic propulsion; attention is presently given to the optimization of scramjet combustor fuel-air mixing and reaction characteristics. A supersonic, spatially developing and reacting mixing layer has been found to serve as an excellent physical model for the mixing and reaction process. Attention is presently given to techniques that have been applied to the enhancement of the mixing processes and the overall combustion efficiency of the mixing layer. A fuel injector configuration has been computationally designed which significantly increases mixing and reaction rates.
Light scattering optimization of chitin random network in ultrawhite beetle scales
NASA Astrophysics Data System (ADS)
Utel, Francesco; Cortese, Lorenzo; Pattelli, Lorenzo; Burresi, Matteo; Vignolini, Silvia; Wiersma, Diederik
2017-09-01
Among natural white-colored photonic structures, one bio-system has become of great interest in the field of disordered optical media: the scale of the white beetle Cyphochilus. Despite its low thickness, on average 7 μm, and low refractive index, this beetle exhibits extremely high brightness and unique whiteness. These properties arise from the interaction of light with a complex network of chitin nanofilaments embedded in the interior of the scales. As has recently been claimed, this could be a consequence of the peculiar morphology of the filament network, which, by means of a high filling fraction (0.61) and structural anisotropy, optimizes the multiple scattering of light. We therefore performed a numerical analysis of the structural properties of the chitin network in order to understand their role in the enhancement of the scale scattering intensity. Modeling the filaments as interconnected rod-shaped scattering centers, we numerically generated the spatial coordinates of the network components. By controlling the quantities that are claimed to play a fundamental role in the brightness and whiteness of the investigated system (filling fraction and average rod orientation, i.e., the anisotropy of the ensemble of scattering centers), we obtained a set of customized random networks. FDTD simulations of light transport were performed on these systems, showing high reflectance for all visible frequencies and proving that the implemented algorithm for numerically generating the structures is suitable for investigating the dependence of reflectance on anisotropy.
Computing the optimal path in stochastic dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora
2016-08-15
In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.
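The paper's own machinery (finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based minimizer) is not reproduced here; the sketch below illustrates the underlying idea with a cruder, swapped-in technique: directly minimizing a discretized Freidlin-Wentzell action over a path with fixed endpoints, using an off-the-shelf quasi-Newton routine. The two-dimensional double-well drift field is hypothetical and is not the population model from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D drift field (gradient of a double-well potential), not the
# population model discussed in the abstract.
def drift(x):
    return np.array([-4.0 * x[0] * (x[0] ** 2 - 1.0), -2.0 * x[1]])

def action(z, x0, x1, T, n):
    """Discretized Freidlin-Wentzell action 0.5 * sum |dx/dt - f(x)|^2 * dt."""
    dt = T / n
    path = np.vstack([x0, z.reshape(n - 1, 2), x1])   # interior points are free
    mid = 0.5 * (path[1:] + path[:-1])                # midpoint rule for f(x)
    vel = (path[1:] - path[:-1]) / dt
    residual = vel - np.array([drift(m) for m in mid])
    return 0.5 * np.sum(residual ** 2) * dt

x0, x1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])  # metastable endpoints
T, n = 10.0, 50
init = np.linspace(x0, x1, n + 1)[1:-1].ravel()       # straight-line initial guess
res = minimize(action, init, args=(x0, x1, T, n), method="L-BFGS-B")
optimal_path = np.vstack([x0, res.x.reshape(n - 1, 2), x1])
print("minimized action:", res.fun)
```

This direct minimization can stall in local minima for harder problems, which is exactly the situation the article's more elaborate scheme is designed to handle.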
A multi-resolution approach for optimal mass transport
NASA Astrophysics Data System (ADS)
Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen
2007-09-01
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
Optimality conditions for the numerical solution of optimization problems with PDE constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.
NASA Technical Reports Server (NTRS)
Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.
1995-01-01
This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.
Maslia, M.L.; Randolph, R.B.
1986-01-01
The theory of anisotropic aquifer hydraulic properties and a computer program, written in Fortran 77, developed to compute the components of the anisotropic transmissivity tensor of two-dimensional groundwater flow are described. To determine the tensor components using one pumping well and three observation wells, the type-curve and straight-line approximation methods are developed. These methods are based on the equation of drawdown developed for two-dimensional nonsteady flow in an infinite anisotropic aquifer. To determine tensor components using more than three observation wells, a weighted least squares optimization procedure is described for use with the type-curve and straight-line approximation methods. The computer program described in this report allows the type-curve, straight-line approximation, and weighted least squares optimization methods to be used in conjunction with data from observation and pumping wells. Three example applications using the computer program and field data gathered during geohydrologic investigations at a site near Dawsonville, Georgia, are provided to illustrate the use of the computer program. The example applications demonstrate the use of the type-curve method using three observation wells, the weighted least squares optimization method using eight observation wells and equal weighting, and the weighted least squares optimization method using eight observation wells and unequal weighting. Results obtained using the computer program indicate major transmissivity in the range of 347-296 sq ft/day, minor transmissivity in the range of 139-99 sq ft/day, aquifer anisotropy in the range of 3.54 to 2.14, principal direction of flow in the range of N. 45.9 degrees E. to N. 58.7 degrees E., and storage coefficient in the range of 0.0063 to 0.0037. The numerical results are in good agreement with field data gathered on the weathered crystalline rocks underlying the investigation site. Supplemental material provides definitions of variables, data requirements and corresponding formats, input data and output results for the example applications, and a listing of the Fortran 77 computer code. (Author's abstract)
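The report's Fortran 77 program is not reproduced here; the following is a minimal sketch of the weighted least squares idea, assuming a Papadopulos-type drawdown equation for two-dimensional nonsteady flow in an infinite anisotropic aquifer and using made-up observation-well data, weights, and a fixed storage coefficient.

```python
import numpy as np
from scipy.special import exp1            # Theis well function W(u) = exp1(u)
from scipy.optimize import least_squares

def drawdown(p, x, y, t, Q, S):
    """Papadopulos-type drawdown for 2-D nonsteady flow in an infinite
    anisotropic aquifer; p = (Txx, Tyy, Txy)."""
    Txx, Tyy, Txy = p
    det = Txx * Tyy - Txy ** 2
    u = S * (Tyy * x ** 2 - 2.0 * Txy * x * y + Txx * y ** 2) / (4.0 * t * det)
    return Q / (4.0 * np.pi * np.sqrt(det)) * exp1(u)

rng = np.random.default_rng(0)
# Hypothetical observation-well coordinates (ft), times (days), and weights.
x = np.array([50.0, -30.0, 10.0, 80.0])
y = np.array([20.0, 60.0, -70.0, -40.0])
t = np.array([0.5, 0.5, 1.0, 1.0])
w = np.array([1.0, 1.0, 0.5, 0.5])
Q, S = 5000.0, 0.005                      # pumping rate (cu ft/day), storage coeff.

# Synthetic "measured" drawdowns generated from an assumed tensor plus noise.
true = np.array([400.0, 150.0, 60.0])
s_obs = drawdown(true, x, y, t, Q, S) + rng.normal(scale=0.02, size=x.size)

def residuals(p):
    return w * (drawdown(p, x, y, t, Q, S) - s_obs)

fit = least_squares(residuals, x0=[300.0, 150.0, 10.0],
                    bounds=([100.0, 100.0, -90.0], [1000.0, 1000.0, 90.0]))
Txx, Tyy, Txy = fit.x
# Principal transmissivities and direction follow from the tensor eigen-decomposition.
eigval, eigvec = np.linalg.eigh(np.array([[Txx, Txy], [Txy, Tyy]]))
print("principal transmissivities:", eigval)
```

The eigen-decomposition at the end yields the major and minor transmissivities and the principal direction of flow, which are the same quantities reported in the example applications above.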
Enhancing high-order harmonic generation by sculpting waveforms with chirp
NASA Astrophysics Data System (ADS)
Peng, Dian; Frolov, M. V.; Pi, Liang-Wen; Starace, Anthony F.
2018-05-01
We present a theoretical analysis showing how chirp can be used to sculpt two-color driving laser field waveforms in order to enhance high-order harmonic generation (HHG) and/or extend HHG cutoff energies. Specifically, we consider driving laser field waveforms composed of two ultrashort pulses having different carrier frequencies in each of which a linear chirp is introduced. Two pairs of carrier frequencies of the component pulses are considered: (ω, 2ω) and (ω, 3ω). Our results show how changing the signs of the chirps in each of the two component pulses leads to drastic changes in the HHG spectra. Our theoretical analysis is based on numerical solutions of the time-dependent Schrödinger equation and on a semiclassical analytical approach that affords a clear physical interpretation of how our optimized waveforms lead to enhanced HHG spectra.
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD is compared to ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
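A minimal sketch of the split-on-demand step as described in the abstract, i.e., recursively splitting an interval at a random position whenever it contains more search points than a threshold and then assigning integer codes to the nonempty intervals, might look as follows. The function names and the fixed threshold are illustrative, not taken from the ECGA integration; in the actual EDA loop the threshold would be decreased at every iteration.

```python
import random

def split_on_demand(points, lo, hi, threshold, rng):
    """Recursively split [lo, hi) wherever it contains more than `threshold`
    points; return the list of resulting sub-intervals."""
    inside = [p for p in points if lo <= p < hi]
    if len(inside) <= threshold:
        return [(lo, hi)]
    cut = rng.uniform(lo, hi)             # random split position
    return (split_on_demand(inside, lo, cut, threshold, rng) +
            split_on_demand(inside, cut, hi, threshold, rng))

def discretize(points, lo, hi, threshold, rng):
    intervals = split_on_demand(points, lo, hi, threshold, rng)
    # Keep only nonempty intervals and assign them consecutive integer codes.
    nonempty = [iv for iv in intervals
                if any(iv[0] <= p < iv[1] for p in points)]
    codes = []
    for p in points:
        for code, (a, b) in enumerate(nonempty):
            if a <= p < b:
                codes.append(code)
                break
    return nonempty, codes

rng = random.Random(0)
points = [rng.uniform(0.0, 1.0) for _ in range(20)]
threshold = 5                             # would shrink at every EDA iteration
intervals, codes = discretize(points, 0.0, 1.0, threshold, rng)
print(len(intervals), codes)
```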
Numerical simulation of several impact attenuator design for a formula student car
NASA Astrophysics Data System (ADS)
Sinaga, Farlian Rizky; Ubaidillah, Kurniawan, Krishna Eka; Fadhil, Muhamad Ivan; Cahyono, Sukmaji Indro; Idris, Muhamad Hafiz
2018-02-01
In Formula SAE (Society of Automotive Engineers) competition, safety is a crucial factor. One of the safety components of a Formula SAE car is the impact attenuator. The purpose of this study is to identify, among several existing designs, the impact attenuator design with the best ability to absorb kinetic energy, using numerical approaches to estimate its behavior under dynamic impact. The impact attenuator material is a combination of aluminum and Zirconium G350. The simulation was carried out by crashing the impact attenuator into a rigid wall to obtain the resulting deformation and the energy absorbed. The impact attenuator design to be simulated should be optimized to meet the Formula SAE requirements: it must be able to absorb 7350 joules of energy at an impact speed of 7 m/s with a deformation at the bulkhead of less than 25.4 mm.
NASA Astrophysics Data System (ADS)
Chen, L. P.; He, L. P.; Chen, D. C.; Lu, G.; Li, W. J.; Yuan, J. M.
2017-01-01
Warpage deformation plays an important role in the performance of automobile interior components fabricated from natural fiber reinforced composites. The present work investigated the influence of process parameters on the warpage behavior of an A-pillar trim made of ramie fiber (RF) reinforced polypropylene (PP) composites (RF/PP) via numerical simulation with the orthogonal experiment method and range analysis. The results indicated that fiber addition and packing pressure were the most important factors affecting warpage. The A-pillar trim achieved a minimum warpage of 2.124 mm under the optimum parameters. The optimal process parameters are: a packing pressure of 70% of the default injection pressure, a fiber addition of 20 wt%, a melt temperature of 185 °C, the mold temperature, a filling time of 7 s and a packing time of 17 s.
Design optimization of a vaneless ``fish-friendly'' swirl injector for small water turbines
NASA Astrophysics Data System (ADS)
Airody, Ajith; Peterson, Sean D.
2015-11-01
Small-scale hydro-electric plants are attractive options for powering remote sites, as they draw energy from local bodies of water. However, the environmental impact on the aquatic life drawn into the water turbine is a concern. To mitigate adverse consequences on the local fauna, small-scale water turbine design efforts have focused on developing ``fish-friendly'' facilities. The components of these turbines tend to have wider passages between the blades when compared to traditional turbines, and the rotors are designed to spin at much lower angular velocities, thus allowing fish to pass through safely. Galt Green Energy has proposed a vaneless casing that provides the swirl component to the flow approaching the rotor, eliminating the need for inlet guide vanes. We numerically model the flow through the casing using ANSYS CFX to assess the evolution of the axial and circumferential velocity symmetry and uniformity in various cross-sections within and downstream of the injector. The velocity distributions, as well as the pressure loss through the injector, are functions of the pitch angle and number of revolutions of the casing. Optimization of the casing design is discussed via an objective function consisting of the velocity and pressure performance measures.
How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?
Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J
2016-04-05
Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
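The abstract does not spell out the optimization algorithm used together with PCA, so the sketch below stands in with a greedy forward selection on a synthetic, correlated product-by-indicator matrix: indicators are added one at a time until regression on the selected subset explains a target share of the variance of the full standardized indicator set. The data, the factor structure, and the resulting numbers are hypothetical and will not match the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(976, 6))                    # hidden impact "drivers"
loadings = rng.normal(size=(6, 135))
X = latent @ loadings + 0.3 * rng.normal(size=(976, 135))   # hypothetical scores
Z = (X - X.mean(0)) / X.std(0)                        # standardize before PCA

# Variance covered by the leading principal components.
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
print("variance covered by first 4 PCs:", explained[3])

def r2_of_subset(Z, idx):
    """Share of total variance in Z explained by least-squares regression
    on the selected indicator columns."""
    B, *_ = np.linalg.lstsq(Z[:, idx], Z, rcond=None)
    resid = Z - Z[:, idx] @ B
    return 1.0 - resid.var() / Z.var()

selected, target = [], 0.92
while len(selected) < Z.shape[1]:
    remaining = [j for j in range(Z.shape[1]) if j not in selected]
    best = max(remaining, key=lambda j: r2_of_subset(Z, selected + [j]))
    selected.append(best)
    if r2_of_subset(Z, selected) >= target:
        break
print("greedy key set size:", len(selected))
```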
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deschaine, L.M.; Chalmers Univ. of Technology, Dept. of Physical Resources, Complex Systems Group, Goteborg
2008-07-01
The global impact on human health and the environment from large scale chemical / radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses development of optimal policy design for large scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not singularly refer to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. Because the approach is developed in a complex systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist or to extend their accuracy. The uncertainty and sparse nature of information in earth science simulations necessitate stochastic representations. For discussion purposes, the solution to these site-wide challenges is divided into three sub-components: plume finding, long term monitoring, and site-wide remediation. Plume finding is the optimal estimation of the plume fringe(s) at a specified time. It is optimized by fusing geo-stochastic flow and transport simulations with the information content of data using a Kalman filter. The result is an optimal monitoring sensor network; the decision variables are the sensor locations in three dimensions. Long term monitoring extends this concept and integrates the spatial-temporal correlations to optimize the decision variables of where to sample and when to sample over the project life cycle. Optimization of the location and timing of samples to meet the desired accuracy of temporal plume movement is accomplished using enumeration or genetic algorithms. The remediation optimization solves the multi-component, multiphase system of equations and incorporates constraints on life-cycle costs, maximum annual costs, maximum allowable annual discharge (for assessing the monitored natural attenuation solution) and constraints on where remedial system component(s) can be located; management overrides to force certain solutions to be chosen are also incorporated for solution design.
It uses a suite of optimization techniques, including the outer approximation method, Lipschitz global optimization, genetic algorithms, and the like. The automated optimal remedial design algorithm requires that a stable simulator be available for the simulated process. This is commonly the case for all of the above specifications except true three-dimensional multiphase flow. Much work is currently being conducted in the industry to develop stable 3D, three-phase simulators. If needed, an interim heuristic algorithm is available to get close to optimal for these conditions. This system provides the full capability to optimize multi-source, multiphase, and multicomponent sites. The results of applying just components of these algorithms have produced predicted savings of as much as $90,000,000 (US) when compared to alternative solutions. Investment in a pilot program to test the model saved 100% of the $20,000,000 predicted for the smaller test implementation. This was done without loss of effectiveness, and the work received an award from the Vice President of the United States - and now Nobel Peace Prize winner - Al Gore. (authors)
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration has been fundamental to all types of hydro-system modeling since its beginnings, as it approximates the parameters that mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that comprises synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate optimization methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that lie outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for global optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: Automated calibration of hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
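The hydro-morphological model itself is not available here, but the general calibration setup described above, minimizing an objective built from synthetic measurements and simulated data, can be mimicked with a deliberately ill-conditioned stand-in forward model and a few of the scipy.optimize methods named in the abstract. The forward model and parameter values are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(1)
true_params = np.array([0.03, 1.5])          # hypothetical friction/transport params

def model(p, x):
    # Stand-in forward model; deliberately weakly sensitive to p[1] (ill-posed).
    return p[0] * np.exp(0.01 * p[1] * x) * np.sqrt(x + 1.0)

x = np.linspace(0.0, 100.0, 50)
synthetic = model(true_params, x) + 1e-4 * rng.normal(size=x.size)   # synthetic data

def objective(p):
    return np.sum((model(p, x) - synthetic) ** 2)

p0 = np.array([0.1, 0.5])
# "CG" uses a Polak-Ribiere conjugate gradient in scipy; "TNC" is truncated Newton.
for method in ["Nelder-Mead", "CG", "BFGS", "L-BFGS-B", "TNC"]:
    res = minimize(objective, p0, method=method)
    print(f"{method:12s}  f={res.fun:.3e}  p={res.x}")

# A stochastic/global alternative (computationally heavier, as noted above).
res_de = differential_evolution(objective, bounds=[(0.0, 1.0), (0.0, 5.0)], seed=1)
print("DiffEvol     ", res_de.fun, res_de.x)
```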
NASA Astrophysics Data System (ADS)
Terachi, Yusuke; Terao, Yutaka; Ohsaki, Hiroyuki; Sakurai, Yuki; Matsumura, Tomotake; Sugai, Hajime; Utsunomiya, Shin; Kataza, Hirokazu; Yamamoto, Ryo
2017-07-01
We have carried out a numerical analysis of the mechanical properties of a superconducting magnetic bearing (SMB). A contactless bearing operating below 10 K with low rotational energy loss is an attractive rotational mechanism for a polarization modulator in a cosmic microwave background experiment. In such an application, a rotor diameter of about 400 mm forces us to employ a segmented magnet, and as a result there are inevitable spatial gaps between the segments. In order to understand the path towards design optimization, 2D and 3D FEM analyses were carried out to examine the fundamental characteristics of SMBs for a polarization modulator. Two axial-flux-type SMBs were considered in the analysis: (a) an SMB with axially magnetized permanent magnets (PMs), and (b) an SMB with radially magnetized PMs and steel components for the magnetic flux paths. Magnetic flux lines and density distributions, electromagnetic force characteristics, spring constants, etc. were compared among several variations of the SMBs. Based on the numerical results, we discuss what type, configuration and design of SMB is most suitable for a polarization modulator.
Dynamic one-dimensional modeling of secondary settling tanks and design impacts of sizing decisions.
Li, Ben; Stenstrom, Michael K
2014-03-01
As one of the most significant components in the activated sludge process (ASP), secondary settling tanks (SSTs) can be investigated with mathematical models to optimize design and operation. This paper takes a new look at the one-dimensional (1-D) SST model by analyzing and considering the impacts of numerical problems, especially the process robustness. An improved SST model with Yee-Roe-Davis technique as the PDE solver is proposed and compared with the widely used Takács model to show its improvement in numerical solution quality. The improved and Takács models are coupled with a bioreactor model to reevaluate ASP design basis and several popular control strategies for economic plausibility, contaminant removal efficiency and system robustness. The time-to-failure due to rising sludge blanket during overloading, as a key robustness indicator, is analyzed to demonstrate the differences caused by numerical issues in SST models. The calculated results indicate that the Takács model significantly underestimates time to failure, thus leading to a conservative design. Copyright © 2013 Elsevier Ltd. All rights reserved.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. Generality of the method allows nonlinear effects to aerodynamics and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for an F-5A and an F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1991-01-01
Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, so some mechanism is needed for the transfer of information between these disciplines and the optimization algorithm; response surfaces are well suited to this role. They are also suited to problems which may require numerous re-optimizations, such as multi-objective function optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that developing response surfaces requires evaluating randomly selected or preselected points in the design space. Thus, they have been thought to be inefficient compared to algorithms that proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: artificial neural nets may be more efficient in developing response surfaces than the polynomial approximations which have been used in the past. This development is the concern of this work.
Numerical prediction of turbulent oscillating flow and associated heat transfer
NASA Technical Reports Server (NTRS)
Koehler, W. J.; Patankar, S. V.; Ibele, W. E.
1991-01-01
A crucial point for further development of engines is the optimization of its heat exchangers which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in the Stirling engine designs for space power are in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine reasonably enough if used for a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow. This information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information about transitional and turbulent oscillating flow, which could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has obtained experimental data about transitional and turbulent oscillating flow. However, since the experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to numerically simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.
Meteorology Research in DOE's Atmosphere to Electrons (A2e) Program
NASA Astrophysics Data System (ADS)
Cline, J.; Haupt, S. E.; Shaw, W. J.
2017-12-01
DOE's Atmosphere to Electrons (A2e) program is performing cutting-edge research to allow optimization of wind plants. This talk will summarize the atmospheric science portion of A2e, with an overview of recent and planned observation and modeling projects designed to bridge the terra incognita between the mesoscale and the microscales that affect wind plants. A2e is a major focus of the Wind Energy Technologies Office (WETO) within the Office of Energy Efficiency & Renewable Energy (EERE) at the DOE. The overall objective of A2e is to optimize wind power production by integrating improved knowledge of atmospheric inflow (fuel), turbine and plant aerodynamics, and control systems. The atmospheric component of the work addresses both the need for improved forecasting of hub-height winds and the need for improved turbulence characterization for turbine inflows under realistic atmospheric conditions and terrain. Several projects will be discussed that address observations of meteorological variables in regions not typically observed. The modeling needs are addressed through major multi-institutional integrated studies comprising both theoretical and numerical advances to improve models and field observations for physical insight. Model improvements are subjected to formal verification and validation, and numerical and observational data are archived and disseminated to the public through the A2e Data Archive and Portal (DAP; http://a2e.energy.gov). The overall outcome of this work will be increased annual energy production from wind plants and improved turbine lifetimes through a better understanding of atmospheric loading. We will briefly describe major components of the atmospheric part of the A2e strategy and work being done and planned.
In Silico Design of DNP Polarizing Agents: Can Current Dinitroxides Be Improved?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perras, Frédéric A.; Sadow, Aaron; Pruski, Marek
2017-06-09
Numerical calculations of enhancement factors offered by dynamic nuclear polarization in solids under magic angle spinning (DNP-MAS) were performed to determine the optimal EPR parameters for a dinitroxide polarizing agent. We found that the DNP performance of a biradical is more tolerant to the relative orientation of the two nitroxide moieties than previously thought. In general, any condition in which the gyy tensor components of both radicals are perpendicular to one another is expected to have near-optimal DNP performance. These results highlight the important role of the exchange coupling, which can lessen the sensitivity of DNP performance to the inter-radical distance, but also lead to lower enhancements when the number of atoms in the linker becomes less than three. Finally, the calculations showed that the electron T1e value should be near 500 μs to yield optimal performance. Importantly, the newest polarizing agents already feature all of the qualities of the optimal polarizing agent, leaving little room for further improvement. Further research into DNP polarizing agents should then target non-nitroxide radicals, as well as improvements in sample formulations to advance high-temperature DNP and limit quenching and reactivity.
NASA Astrophysics Data System (ADS)
Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming
2018-06-01
This paper presents a new integrated planning framework for effectively accommodating electric vehicles (EVs) in smart distribution systems (SDS). The proposed method incorporates the various investment options available to the utility collectively, including distributed generation (DG), capacitors and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by simulating the reliability level of the planning scheme and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.
Elements of an algorithm for optimizing a parameter-structural neural network
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2016-06-01
The processing of information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves on classic numerical algorithms in cases where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling measurement data. The concept presented here results from the integration of neural networks and parameter optimization methods and makes it possible to avoid having to define the structure of a network arbitrarily. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
Performance limitations of translationally symmetric nonimaging devices
NASA Astrophysics Data System (ADS)
Bortz, John C.; Shatz, Narkis E.; Winston, Roland
2001-11-01
The component of the optical direction vector along the symmetry axis is conserved for all rays propagated through a translationally symmetric optical device. This quality, referred to herein as the translational skew invariant, is analogous to the conventional skew invariant, which is conserved in rotationally symmetric optical systems. The invariance of both of these quantities is a consequence of Noether's theorem. We show how performance limits for translationally symmetric nonimaging optical devices can be derived from the distributions of the translational skew invariant for the optical source and for the target to which flux is to be transferred. Examples of computed performance limits are provided. In addition, we show that a numerically optimized non-tracking solar concentrator utilizing symmetry-breaking surface microstructure can overcome the performance limits associated with translational symmetry. The optimized design provides a 47.4% increase in efficiency and concentration relative to an ideal translationally symmetric concentrator.
NASA Astrophysics Data System (ADS)
Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.
2017-11-01
Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed, high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and high temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for two data assimilation techniques. The third objective is to investigate how the frequency at which the data are assimilated affects the overall predictions.
NASA Astrophysics Data System (ADS)
Li, Kai; Liu, Jun; Liu, Weiqiang
2017-04-01
As a novel thermal protection technique for hypersonic vehicles, Magnetohydrodynamic (MHD) heat shield system has been proved to be of great intrinsic value in the hypersonic field. In order to analyze the thermal protection mechanisms of such a system, a physical model is constructed for analyzing the effect of the Lorentz force components in the counter and normal directions. With a series of numerical simulations, the dominating Lorentz force components are analyzed for the MHD heat flux mitigation in different regions of a typical reentry vehicle. Then, a novel magnetic field with variable included angle between magnetic induction line and streamline is designed, which significantly improves the performance of MHD thermal protection in the stagnation and shoulder areas. After that, the relationships between MHD shock control and MHD thermal protection are investigated, based on which the magnetic field above is secondarily optimized obtaining better performances of both shock control and thermal protection. Results show that the MHD thermal protection is mainly determined by the Lorentz force's effect on the boundary layer. From the stagnation to the shoulder region, the flow deceleration effect of the counter-flow component is weakened while the flow deflection effect of the normal component is enhanced. Moreover, there is no obviously positive correlation between the MHD shock control and thermal protection. But once a good Lorentz force's effect on the boundary layer is guaranteed, the thermal protection performance can be further improved with an enlarged shock stand-off distance by strengthening the counter-flow Lorentz force right after shock.
Numerical solution of a conspicuous consumption model with constant control delay
Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea
2011-01-01
We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Amours, Guillaume; Rahem, Ahmed; Williams, Bruce
2007-05-17
The automotive industry, with an increasing demand to reduce vehicle weight through the adoption of lightweight materials, requires a search for efficient methods suited to these materials. One attractive concept is to use hydroforming of aluminium tubes. By using FE simulations, the process can be optimized to reduce the risk of failure while maintaining energy absorption and component integrity under crash conditions. It is important to capture the level of residual ductility after forming to allow proper design for crashworthiness. This paper presents numerical and experimental studies that have been carried out for high pressure hydroforming operations to study the influence of the tube corner radius, end feeding, material thinning, and work hardening in 76.2 mm diameter, 3 mm wall thickness AA5754 aluminium alloy tube. End feeding was used to increase the formability of the tubes. The influence of the end feed displacement versus tube forming pressure schedule was studied to optimize the forming process operation to reduce thinning. Validation of the numerical simulations was performed by comparison of the predicted strain distributions and thinning with measured quantities. The effect of element formulation (thin shell versus solid elements) was also considered in the models.
Quadratic Optimization in the Problems of Active Control of Sound
NASA Technical Reports Server (NTRS)
Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
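As a toy discrete analogue of the unconstrained L2 minimization described above, one can assemble a linear transfer matrix from candidate control sources to the field values on the protected region (here a random complex matrix standing in for the discretized Helmholtz solution operator) and take the minimum-norm control that cancels the unwanted field, which is the least-norm least-squares solution given by the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 120                 # 40 field points in the protected region, 120 controls
# Stand-in transfer matrix: row j gives the complex field at point j produced by
# unit-strength control sources (a Helmholtz Green's-function matrix in practice).
G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
noise_field = rng.normal(size=m) + 1j * rng.normal(size=m)   # unwanted component

# Unconstrained L2 minimization: among all controls g with G g = -noise_field,
# the pseudoinverse picks the one of minimum Euclidean (L2) norm.
g_l2 = np.linalg.pinv(G) @ (-noise_field)

residual = G @ g_l2 + noise_field
print("residual field norm :", np.linalg.norm(residual))   # ~0 (exact cancellation)
print("control L2 norm     :", np.linalg.norm(g_l2))
```

The L1 formulation mentioned in the abstract would instead be posed as a linear-programming-type problem and tends to concentrate the control strength, consistent with the monopole layers described above.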
2016-01-22
Numerical electromagnetic simulations based on the multilevel fast multipole method (MLFMM), conducted with FEKO software (www.feko.info), were used to analyze and optimize the antenna.
Optimization of the Bridgman crystal growth process
NASA Astrophysics Data System (ADS)
Margulies, M.; Witomski, P.; Duffar, T.
2004-05-01
A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It permits optimization of the furnace temperature field and the pulling rate versus time in order to decrease the radial thermal gradients in the sample. Some constraints are also included in order to ensure physically realistic results. The model includes the two classical non-linearities associated with crystal growth processes: the radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem are briefly described. Examples show that the method works in a satisfactory way; however, the results are dependent on the numerical parameters. Improvements of the optimization model, from both the physical and the numerical point of view, are suggested.
Yin, Changchuan
2015-04-01
To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences must first be mapped into numerical sequences. Thus, effective numerical mappings of DNA sequences play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC) in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied in exon and intron prediction by the Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
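A minimal sketch of the quantity being optimized: map a DNA string to a numeric series with a lookup table (an arbitrary stand-in for the optimized GCC values, not the paper's values) and compute the 3-periodicity SNR as the spectral power at frequency N/3 divided by the average power; this is the score a simulated-annealing step would try to maximize.

```python
import numpy as np

def snr_3periodicity(seq, mapping):
    """Power at the N/3 frequency divided by the average power - the
    signal-to-noise ratio that the annealing step would maximize."""
    x = np.array([mapping[b] for b in seq], dtype=float)
    spectrum = np.abs(np.fft.fft(x - x.mean())) ** 2
    return spectrum[len(seq) // 3] / spectrum[1:].mean()

# Arbitrary stand-in numeric values (the paper optimizes codon-context values
# by simulated annealing; these are NOT those values).
mapping = {"A": 0.1, "C": 0.9, "G": 0.7, "T": 0.3}

rng = np.random.default_rng(0)
bases = np.array(list("ACGT"))
# A toy "exon-like" sequence with an artificial period-3 bias vs. a random one.
exon_like = "".join(rng.choice(bases, p=[0.1, 0.4, 0.4, 0.1]) if i % 3 == 0
                    else rng.choice(bases) for i in range(300))
random_seq = "".join(rng.choice(bases) for _ in range(300))

print("exon-like SNR :", snr_3periodicity(exon_like, mapping))
print("random SNR    :", snr_3periodicity(random_seq, mapping))
```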
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Open pit mining profit maximization considering selling stage and waste rehabilitation cost
NASA Astrophysics Data System (ADS)
Muttaqin, B. I. A.; Rosyidi, C. N.
2017-11-01
In open pit mining activities, determination of the cut-off grade is crucial because the cut-off grade affects how much profit the mining company will earn. In this study, we developed a cut-off grade determination model for the open pit mining industry considering the cost of mining, waste removal (rehabilitation) cost, processing cost, fixed cost, and selling stage cost. The main goal of this study is to develop a model of cut-off grade determination that maximizes total profit. Secondly, a sensitivity analysis is performed to observe how the model responds to changes in the cost components. The optimization results show that the models can help mining company managers determine the optimal cut-off grade and also estimate how much profit the mining company can earn. To illustrate the application of the models, a numerical example and a set of sensitivity analyses are presented. From the results of the sensitivity analysis, we conclude that changes in the selling price greatly affect the optimal cut-off value and the total profit.
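The paper's model equations are not reproduced here; the sketch below only illustrates the structure of such a model, evaluating total profit on a grid of candidate cut-off grades under a hypothetical grade distribution and made-up unit costs, with material above the cut-off mined, processed and sold, and material below it mined and charged a waste rehabilitation cost.

```python
import numpy as np

# Hypothetical block-model grades (fraction of metal) and a fixed block tonnage.
rng = np.random.default_rng(0)
grades = rng.lognormal(mean=-4.0, sigma=0.8, size=10_000)   # roughly 1-3 % grades
block_tonnes = 1_000.0

# Hypothetical unit costs/prices (per tonne unless noted); not from the paper.
price = 6_000.0          # selling price per tonne of metal
c_mine = 3.0             # mining cost
c_proc = 12.0            # processing cost (ore only)
c_waste = 1.5            # waste removal / rehabilitation cost (waste only)
c_sell = 0.05 * price    # selling-stage cost per tonne of metal
c_fixed = 2.0e6          # fixed cost for the period
recovery = 0.90

def total_profit(cutoff):
    ore = grades >= cutoff
    ore_t = ore.sum() * block_tonnes
    waste_t = (~ore).sum() * block_tonnes
    metal_t = grades[ore].sum() * block_tonnes * recovery
    revenue = (price - c_sell) * metal_t
    cost = c_mine * (ore_t + waste_t) + c_proc * ore_t + c_waste * waste_t + c_fixed
    return revenue - cost

cutoffs = np.linspace(0.0, 0.05, 501)
profits = np.array([total_profit(g) for g in cutoffs])
best = cutoffs[np.argmax(profits)]
print(f"optimal cut-off grade ~ {best:.4f}, profit ~ {profits.max():.3e}")
```

Varying the unit costs or the selling price in this sketch and re-running the grid search is the analogue of the sensitivity analysis described above.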
NASA Astrophysics Data System (ADS)
Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon
2016-06-01
One of the supply chain risks for hi-tech products is the result of rapid technological innovation; it results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy is needed to consider the impact of these risks, especially when some portion of the shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model gives a suboptimal solution with a fixed replenishment interval and a simpler computational process; the other gives the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.
Numerical research of the optimal control problem in the semi-Markov inventory model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.
2015-03-10
This paper is devoted to the numerical simulation of a stochastic system for inventory management of products using a controlled semi-Markov process. Results obtained with special-purpose software for studying the system and finding the optimal control are presented.
OPTIMIZATION OF COUNTERCURRENT STAGED PROCESSES.
(*CHEMICAL ENGINEERING, OPTIMIZATION), (*DISTILLATION, OPTIMIZATION), INDUSTRIAL PRODUCTION, INDUSTRIAL EQUIPMENT, MATHEMATICAL MODELS, DIFFERENCE EQUATIONS, NONLINEAR PROGRAMMING, BOUNDARY VALUE PROBLEMS, NUMERICAL INTEGRATION
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
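For readers who want to see the pre-image problem in practice, the snippet below uses scikit-learn's KernelPCA with fit_inverse_transform=True, which learns an approximate pre-image map by ridge regression; this is a different technique from the distance-constraint construction proposed in the paper, and the digits data set stands in for USPS.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA

# Digits as a stand-in for the USPS set used in the paper.
X = load_digits().data / 16.0
rng = np.random.default_rng(0)
X_noisy = X + 0.3 * rng.normal(size=X.shape)

# Kernel PCA with an RBF kernel; fit_inverse_transform=True trains an
# approximate pre-image map (a learned inverse, not the paper's
# distance-constraint construction).
kpca = KernelPCA(n_components=30, kernel="rbf", gamma=0.03,
                 fit_inverse_transform=True, alpha=0.1)
kpca.fit(X[:1000])                        # "clean" training images

# Denoise: project noisy images onto the leading kernel components, then
# map the feature-space points back to input space (the pre-image step).
X_denoised = kpca.inverse_transform(kpca.transform(X_noisy[1000:1010]))
print(X_denoised.shape)                   # (10, 64) reconstructed images
```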
Performance improvements of symmetry-breaking reflector structures in nonimaging devices
Winston, Roland
2004-01-13
A structure and method for providing a broken-symmetry reflector structure for a solar concentrator device. The component of the optical direction vector along the symmetry axis is conserved for all rays propagated through a translationally symmetric optical device. This quantity, referred to as the translational skew invariant, is analogous to the conventional skew invariant, which is conserved in rotationally symmetric optical systems. Performance limits for translationally symmetric nonimaging optical devices are derived from the distributions of the translational skew invariant for the optical source and for the target to which flux is to be transferred. A numerically optimized non-tracking solar concentrator utilizing symmetry-breaking reflector structures can overcome the performance limits associated with translational symmetry.
Effect of the mobility on (I-V) characteristics of the MOSFET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benzaoui, Ouassila, E-mail: o-benzaoui@yahoo.fr; Azizi, Cherifa, E-mail: aziziche@yahoo.fr
2013-12-16
The MOSFET transistor has been the subject of many studies and research efforts in electronics, data processing, and telecommunications aimed at exploiting its interesting and promising characteristics. This contribution is devoted to the effect of mobility on the static I-V characteristics of the MOSFET. The study enables us to calculate the drain current as a function of bias in both the linear and saturated modes; this effect is evaluated using a numerical simulation program. The influence of mobility was studied, and the obtained results allow us to determine the mobility law in the MOSFET that gives optimal I-V characteristics of the component.
Mean-Reverting Portfolio With Budget Constraint
NASA Astrophysics Data System (ADS)
Zhao, Ziping; Palomar, Daniel P.
2018-05-01
This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
Sabau, Adrian S.
2016-04-22
Modeling and simulation of multiphysical phenomena need to be considered in the design and optimization of the mechanical properties of cast components in order to accelerate the introduction of new cast alloys. Data on casting defects, including microstructure features, are crucial for evaluating the final performance-related properties of the component. In this paper, the models required for the prediction of interdendritic casting defects, such as microporosity and hot tears, are reviewed. Data on calculated solidification shrinkage are presented and their effects on microporosity levels discussed. Numerical simulation results for microporosity are presented for A356, 356, and 319 aluminum alloys using ProCAST software. The calculated pressure drop of the interdendritic liquid was found to be quite significant, and the regions of high pressure drop can be used as an indicator of the severity of interdendritic microporosity defects.
Sub-one-third wavelength focusing of surface plasmon polaritons excited by linearly polarized light.
Wang, Jiayuan; Zhang, Jiasen
2018-05-28
We report the generation of a subwavelength focal spot for surface plasmon polaritons (SPPs) by increasing the proportion of high-spatial-frequency components in the plasmonic focusing field. We have derived an analytical expression for the angular-dependent contribution of an arbitrary-shaped SPP line source to the focal field and have found that the proportion for high-spatial-frequency components can be significantly increased by launching SPPs from a horizontal line source. Accordingly, we propose a rectangular-groove plasmonic lens (PL) consisting of horizontally-arrayed central grooves and slantingly-arrayed flanking grooves on gold film. We demonstrate both numerically and experimentally that, under linearly polarized illumination, such a PL generates a focal spot of full width half maximum 274 nm at an operating wavelength of 830 nm. The method we describe provides guidance to the further structure design and optimization for plasmonic focusing devices.
NASA Astrophysics Data System (ADS)
Mao, Mingzhi; Qian, Chen; Cao, Bingyao; Zhang, Qianwu; Song, Yingxiong; Wang, Min
2017-09-01
A digital-signal-processing-enabled spectral converter based on a dual-drive Mach-Zehnder modulator (DD-MZM) is proposed and extensively investigated to realize dynamically reconfigurable and highly transparent spectral conversion. As a further contribution, the optimum operating conditions of the proposed converter are derived, statistically simulated, and experimentally verified in order to optimize the converter performance. The performance under these optimum conditions is verified by detailed numerical simulations and experiments in an intensity-modulation and direct-detection network in terms of the frequency-detuning-dependent conversion efficiency, strict operational transparency to user signal characteristics, and the impact of parasitic components on the conversion performance; the converted component waveform is almost distortion-free. It is also found that the converter is highly robust to variations in input signal power, optical signal-to-noise ratio, extinction ratio, and driving signal frequency.
Advanced Modeling Strategies for the Analysis of Tile-Reinforced Composite Armor
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Chen, Tzi-Kang
1999-01-01
A detailed investigation of the deformation mechanisms in tile-reinforced armored components was conducted to develop the most efficient modeling strategies for the structural analysis of large components of the Composite Armored Vehicle. The limitations of conventional finite elements with respect to the analysis of tile-reinforced structures were examined, and two complementary optimal modeling strategies were developed. These strategies are element layering and the use of a tile-adhesive superelement. Element layering is a technique that uses stacks of shear deformable shell elements to obtain the proper transverse shear distributions through the thickness of the laminate. The tile-adhesive superelement consists of a statically condensed substructure model designed to take advantage of periodicity in tile placement patterns to eliminate numerical redundancies in the analysis. Both approaches can be used simultaneously to create unusually efficient models that accurately predict the global response by incorporating the correct local deformation mechanisms.
NASA Astrophysics Data System (ADS)
Xu, Xue-song
2014-12-01
Under complex currents, the governing equations of motion for marine cables are complex and nonlinear, and the calculation of cable configuration and tension becomes difficult compared with the case of uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the initial guess for the solution of the governing equations. To improve the stability of the numerical calculation, this paper proposes a separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. With the separated particle swarm optimization, the governing nonlinear equations can be solved successfully from any initial solution, and the numerical calculation is very stable. For the calculation of the configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.
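A minimal sketch of the "separated" idea is given below, assuming the cable model is available as a residual function of the governing nonlinear equations: a standard particle swarm optimization is run on one group of variables at a time while the other groups stay frozen, so each swarm searches a lower-dimensional space. The grouping, swarm parameters, and residual function are placeholders, not the settings used in the paper.

```python
import numpy as np

def separated_pso(residual, groups, bounds, n_particles=30, iters=100,
                  n_sweeps=5, rng=None):
    """Minimize ||residual(x)||^2 with a particle swarm run on one group of
    variables at a time while the remaining variables stay frozen; several
    sweeps over the groups are performed (a sketch of the 'separated' idea)."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi)                       # current full solution
    for _ in range(n_sweeps):
        for g in groups:                          # optimize one group at a time
            g = np.asarray(g)
            pos = rng.uniform(lo[g], hi[g], (n_particles, g.size))
            vel = np.zeros_like(pos)
            pbest, pbest_f = pos.copy(), np.full(n_particles, np.inf)
            gbest, gbest_f = x[g].copy(), np.sum(residual(x) ** 2)
            for _ in range(iters):
                for i in range(n_particles):
                    trial = x.copy()
                    trial[g] = pos[i]
                    f = np.sum(residual(trial) ** 2)
                    if f < pbest_f[i]:
                        pbest_f[i], pbest[i] = f, pos[i].copy()
                    if f < gbest_f:
                        gbest_f, gbest = f, pos[i].copy()
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo[g], hi[g])
            x[g] = gbest                          # freeze the optimized group
    return x, np.sum(residual(x) ** 2)

# Hypothetical residual of a small nonlinear system (a stand-in for the cable
# equilibrium equations); the variables are split into two groups.
def residual(x):
    return np.array([x[0] ** 2 + x[1] - 2.0,
                     x[1] * x[2] - 1.0,
                     np.sin(x[2]) + x[3] - 0.5,
                     x[0] + x[3] ** 2 - 1.5])

x, err = separated_pso(residual, groups=[[0, 1], [2, 3]],
                       bounds=[(-3, 3)] * 4, rng=0)
print("solution:", x, " squared residual norm:", err)
```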
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
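The structure of the updating loop can be sketched as follows: simulate the model for trial joint parameters, extract instantaneous amplitudes and frequencies from the response (here via the Hilbert transform of a toy bilinear single-degree-of-freedom oscillator, rather than AMD applied to a real structure), and minimize the residuals with a generic simulated-annealing-type optimizer (SciPy's dual_annealing is used as a stand-in for the authors' simulated annealing). Everything in the sketch, including the toy model, load, and parameter bounds, is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.signal import hilbert

def simulate(k1, k2, dy, m=1.0, c=2.0, dt=0.005, n=3000):
    """Toy single-degree-of-freedom oscillator with a bilinear spring
    (stiffness k1 below the yield displacement dy, k2 above it)."""
    u, v = np.zeros(n), 0.0
    for i in range(1, n):
        f_ext = 5.0 * np.sin(2.0 * np.pi * 1.0 * i * dt)   # harmonic load
        k = k1 if abs(u[i - 1]) <= dy else k2
        a = (f_ext - c * v - k * u[i - 1]) / m
        v += a * dt
        u[i] = u[i - 1] + v * dt
    return u

def instantaneous(u, dt=0.005):
    """Instantaneous amplitude and frequency from the analytic signal."""
    z = hilbert(u - u.mean())
    amp = np.abs(z)
    freq = np.gradient(np.unwrap(np.angle(z)), dt) / (2.0 * np.pi)
    return amp[200:-200], freq[200:-200]        # trim end effects

# Synthetic 'measurement' generated with the true joint parameters.
amp_m, freq_m = instantaneous(simulate(400.0, 150.0, 0.005))

def objective(theta):
    # Residuals of the instantaneous characteristics, equally weighted here.
    amp_s, freq_s = instantaneous(simulate(*theta))
    return np.mean((amp_s - amp_m) ** 2) + np.mean((freq_s - freq_m) ** 2)

res = dual_annealing(objective,
                     bounds=[(100.0, 800.0), (50.0, 400.0), (0.001, 0.02)],
                     maxiter=50, no_local_search=True)   # keep the sketch quick
print("identified (k1, k2, yield displacement):", res.x)
```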
Shape optimization of tibial prosthesis components
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Mraz, P. J.; Davy, D. T.
1993-01-01
NASA technology and optimal design methodologies originally developed for the optimization of composite structures (engine blades) are adapted and applied to the optimization of orthopaedic knee implants. A method is developed enabling the shape tailoring of the tibial components of a total knee replacement implant for optimal interaction within the environment of the tibia. The shape of the implant components are optimized such that the stresses in the bone are favorably controlled to minimize bone degradation, to improve the mechanical integrity of the implant/interface/bone system, and to prevent failures of the implant components. A pilot tailoring system is developed and the feasibility of the concept is demonstrated and evaluated. The methodology and evolution of the existing aerospace technology from which this pilot optimization code was developed is also presented and discussed. Both symmetric and unsymmetric in-plane loading conditions are investigated. The results of the optimization process indicate a trend toward wider and tapered posts as well as thicker backing trays. Unique component geometries were obtained for the different load cases.
Smart algorithms and adaptive methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Tinsley Oden, J.
1989-05-01
A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed: (1) data structures, that is, what approaches are available for modifying the data structures of an approximation so as to reduce errors; (2) error estimation, that is, what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, that is, what algorithms are available that can function on changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.
Vortex generator design for aircraft inlet distortion as a numerical optimization problem
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Levy, Ralph
1991-01-01
Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high altitude loiter each place different constraints on inlet design. Vortex generators, small wing like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both cost and elapsed time to design new aircraft.
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) in the interpolation polynomial coefficients (IPCs) of the weighted sum of gray gases model (WSGM) on the results of numerical investigations and on WSGM optimization were studied. The investigation was conducted using numerical simulations of the processes inside a pulverized coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), a two-term WSGM (W2), and a three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative differences of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results were affected by the NSF unless it exceeded a certain value. An increase in the NSF did not necessarily lead to WSGM optimization; a proper combination of the NSF (CNSF) was the necessary requirement for WSGM optimization.
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
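The CNOP-P computation can be illustrated with a toy model: the parameter perturbation of bounded norm that maximizes the departure of the forecast from the reference run is found by constrained optimization. The sketch below uses a two-variable recharge-oscillator-like caricature of ENSO and SciPy's SLSQP with several random starts; the model, the perturbation norm, and the parameter meanings are assumptions, not the ICM used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def forecast(params, t_end=12.0, dt=0.05):
    """Toy two-variable recharge-oscillator-like model; params perturb the
    coupling (alpha_tau) and thermocline feedback (alpha_Te) strengths."""
    a_tau, a_te = 1.0 + params[0], 1.0 + params[1]
    T, h = 0.5, 0.0                      # SST anomaly and thermocline depth
    for _ in range(int(t_end / dt)):
        dT = a_te * h + a_tau * T - T ** 3
        dh = -0.4 * T - 0.1 * h
        T, h = T + dt * dT, h + dt * dh
    return np.array([T, h])

ref = forecast(np.zeros(2))              # reference (unperturbed) forecast

def neg_growth(p):
    # CNOP-P maximizes the departure from the reference forecast,
    # so we minimize its negative.
    return -np.linalg.norm(forecast(p) - ref)

delta = 0.1                              # assumed perturbation amplitude
cons = {"type": "ineq", "fun": lambda p: delta ** 2 - np.dot(p, p)}
starts = 0.05 * np.random.default_rng(0).standard_normal((8, 2))
best = min((minimize(neg_growth, p0, method="SLSQP", constraints=[cons])
            for p0 in starts), key=lambda r: r.fun)
print("CNOP-P (alpha_tau, alpha_Te perturbation):", best.x)
```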
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers
NASA Astrophysics Data System (ADS)
Lemyre Garneau, Mathieu
A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, categorical), the number of constraints (from 5 to 17), and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant by using a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator, and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering, and spillage losses as a function of the design parameters. A Monte Carlo integration method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses, or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
AI/OR computational model for integrating qualitative and quantitative design methods
NASA Technical Reports Server (NTRS)
Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor
1990-01-01
A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
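The mapping step can be sketched concretely: each categorical level is sent to a vertex of a regular simplex (here simply the standard basis vectors of R^k, which are mutually equidistant; this is one concrete choice of such a mapping, not necessarily the SRSM of the paper), and a kriging surrogate is fitted on the mapped coordinates. The design variables, data, and use of scikit-learn's Gaussian process are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simplex_map(levels):
    """Map k categorical levels to the k standard basis vectors of R^k.
    All pairs of images are equidistant, so no spurious ordering or spacing
    is imposed on the categories."""
    return {lev: row for lev, row in zip(levels, np.eye(len(levels)))}

# Hypothetical matching-design data: two categorical design variables.
materials = ["steel", "aluminium", "titanium"]
joints = ["bolted", "welded"]
m_map, j_map = simplex_map(materials), simplex_map(joints)

def encode(material, joint):
    return np.concatenate([m_map[material], j_map[joint]])

# Assumed training samples: (design, observed objective value).
samples = [("steel", "bolted", 3.1), ("steel", "welded", 2.4),
           ("aluminium", "bolted", 2.9), ("titanium", "welded", 1.8)]
X = np.array([encode(m, j) for m, j, _ in samples])
y = np.array([v for *_, v in samples])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X, y)

# The kriging surrogate can now rank the remaining categorical candidates.
for m in materials:
    for j in joints:
        mu, sd = gp.predict(encode(m, j)[None, :], return_std=True)
        print(f"{m:9s} + {j:6s}: predicted {mu[0]:.2f} +/- {sd[0]:.2f}")
```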
Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions
NASA Technical Reports Server (NTRS)
Cohn, S.; Isaacson, E.; Ghil, M.
1981-01-01
The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and the optimal interpolation (OI) filter are examined for effectiveness as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
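For reference, the analysis step shared by OI and the Kalman filter can be written in a few lines: the gain K = B H^T (H B H^T + R)^(-1) blends the background with the observations, and the prescribed background covariance B spreads the increments to unobserved grid points. The sketch below is a generic illustration on a toy 1-D grid, not the shallow-water configuration of the study.

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """One optimal-interpolation / Kalman analysis step.

    xb : background (forecast) state, shape (n,)
    B  : prescribed background error covariance, shape (n, n)
    y  : observations, shape (p,)
    H  : observation operator, shape (p, n)
    R  : observation error covariance, shape (p, p)
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)                     # analysis state
    Pa = (np.eye(len(xb)) - K @ H) @ B             # analysis error covariance
    return xa, Pa

# Tiny example: 5 grid points, observations at points 1 and 3 only.
n = 5
xb = np.zeros(n)
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-0.5 * (dist / 1.5) ** 2)               # Gaussian correlations
H = np.zeros((2, n)); H[0, 1] = H[1, 3] = 1.0
R = 0.1 * np.eye(2)
xa, Pa = oi_analysis(xb, B, np.array([1.0, -0.5]), H, R)
print(xa)     # increments spread to unobserved points through B
```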
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete scheme and an appropriate solving method. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of the storage saved for the system of linear equations and its flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve the resulting system of linear equations is very costly in both memory and computation. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of the iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals weaken the advantage of the multigrid-preconditioned iterative solver in computing speed or, even worse, can reduce its accuracy in some cases, which implies the need for a reasonable control of directional sampling intervals in the discretization.
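The solver interface can be sketched with SciPy: the discretized frequency-domain operator is assembled as a sparse matrix and handed to BiCGSTAB together with a preconditioner wrapped as a LinearOperator. For brevity the sketch uses a standard 5-point Helmholtz stencil and an incomplete-LU preconditioner as stand-ins for the average-derivative optimal scheme and the multigrid preconditioner described in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small 2D Helmholtz-type operator (5-point Laplacian plus k^2 term) stands
# in for the average-derivative optimal discretization.
n = 64
h, k = 1.0 / n, 10.0
I = sp.identity(n)
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
A = (sp.kron(I, D2) + sp.kron(D2, I) + k ** 2 * sp.identity(n * n)).tocsc()

b = np.zeros(n * n)
b[n * n // 2 + n // 2] = 1.0                      # point source

# Preconditioner wrapped as a LinearOperator; an incomplete LU factorization
# is used here purely as a placeholder for the multigrid cycle of the paper.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

u, info = spla.bicgstab(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"bicgstab info = {info}")
```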
NASA Astrophysics Data System (ADS)
Ye, Ming; Li, Yun; He, Yongning; Daneshmand, Mojgan
2017-05-01
With the development of space technology, microwave components with increased power handling capability and reduced weight are urgently required. In this work, perforated waveguide technology is proposed to suppress the multipactor effect in high power microwave components. Meanwhile, this novel method has the advantage of reducing component weight, which gives it great potential in space applications. The perforated part of a waveguide component can be seen as an electron absorber (namely, its total electron emission yield is zero) since most of the electrons impacting on this part will exit the component. Based on thoroughly benchmarked numerical simulation procedures, we simulated an S-band and an X-band waveguide transformer to conceptually verify this idea. Both electron dynamics simulations and electrical loss simulations demonstrate that the perforation technology can improve the multipactor threshold by at least ~8 dB while maintaining an acceptable insertion loss level compared with the un-perforated components. We also found that components with a larger minimum gap achieve multipactor suppression more easily; this effect is interpreted with a parallel-plate waveguide model. Furthermore, to improve the multipactor threshold of the X-band waveguide transformer with a minimum gap of ~0.1 mm, we proposed a perforation structure with a sloped edge and explained its mechanism. Future study will focus on further optimization of the perforation structure, size, and distribution to maximize the comprehensive performance of microwave components.
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of an investigation of formal nonlinear-programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary Edward
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Satellite station-keeping is an example of a maneuvering application requiring the low thrust and high specific impulse of an arcjet. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets, including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging-diverging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. The equations governing the flow through a constricted arcjet thruster are described. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single-fluid plasma temperature model. An order of magnitude analysis reveals that the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is used in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Comparisons with experimental data and previous numerical results were in excellent agreement. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures.
Numerical Modeling of a Vortex Stabilized Arcjet. Ph.D. Thesis, 1991 Final Report
NASA Technical Reports Server (NTRS)
Pawlas, Gary E.
1992-01-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased. The technique was used to predict the flow through a typical arcjet thruster geometry. Results indicate swirl and viscosity play an important role in the complex geometry of an arcjet.
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary E.
1992-11-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased.
Topology Optimization - Engineering Contribution to Architectural Design
NASA Astrophysics Data System (ADS)
Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2017-10-01
The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective's point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are a discrete mathematical idealization of physical systems. An engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction takes place only between neighbouring cells and is governed by simple, local update rules based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated by selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the location of the material sub-regions on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces different results compared with classical one-material problems.
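A minimal sketch of a Cellular Automata design update is given below: each cell compares the strain energy averaged over its von Neumann neighbourhood with the grid average and moves its density by a fixed step, and a global rescaling keeps the prescribed volume fraction. The update rule, the synthetic energy field standing in for a finite element solve, and the parameters are illustrative assumptions, not the specific rules used by the authors.

```python
import numpy as np

def ca_update(rho, energy, vol_frac, move=0.1):
    """One Cellular Automata design update on a regular grid of cells.

    rho    : current element densities, shape (ny, nx)
    energy : element strain energies from any FE analysis, same shape
    The local rule raises the density of cells whose neighbourhood-averaged
    energy exceeds the grid average and lowers it otherwise; a bisection on a
    global shift enforces the prescribed volume fraction."""
    # von Neumann neighbourhood average (cell + 4 neighbours, edge-padded)
    p = np.pad(energy, 1, mode="edge")
    local = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    step = np.where(local > local.mean(), move, -move)   # local update rule
    rho_new = np.clip(rho + step, 0.01, 1.0)
    lo, hi = -1.0, 1.0                                   # volume bisection
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        trial = np.clip(rho_new + mid, 0.01, 1.0)
        lo, hi = (lo, mid) if trial.mean() > vol_frac else (mid, hi)
    return np.clip(rho_new + 0.5 * (lo + hi), 0.01, 1.0)

# Toy illustration with a synthetic energy field (an FE solve would supply
# the real one, restricted to the sub-region of its assigned material).
ny, nx = 40, 80
rho = np.full((ny, nx), 0.5)
yy, xx = np.mgrid[0:ny, 0:nx]
energy = np.exp(-((xx - nx / 2) ** 2 + (yy - ny) ** 2) / 200.0)
for _ in range(30):
    rho = ca_update(rho, energy, vol_frac=0.5)
print("volume fraction:", rho.mean().round(3))
```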
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitation, and heating, waypoint, and no-fly zone constraints.
Geometric versus numerical optimal control of a dissipative spin-(1/2) particle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapert, M.; Sugny, D.; Zhang, Y.
2010-12-15
We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problem of minimizing the duration of the control and that of minimizing its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. Very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.
Optimizing snake locomotion on an inclined plane
NASA Astrophysics Data System (ADS)
Wang, Xiaolin; Osborne, Matthew T.; Alben, Silas
2014-01-01
We develop a model to study the locomotion of snakes on inclined planes. We determine numerically which snake motions are optimal for two retrograde traveling-wave body shapes, triangular and sinusoidal waves, across a wide range of frictional parameters and incline angles. In the regime of large transverse friction coefficients, we find power-law scalings for the optimal wave amplitudes and corresponding costs of locomotion. We give an asymptotic analysis to show that the optimal snake motions are traveling waves with amplitudes given by the same scaling laws found in the numerics.
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Three-dimensional shape optimization of a cemented hip stem and experimental validations.
Higa, Masaru; Tanino, Hiromasa; Nishimura, Ikuya; Mitamura, Yoshinori; Matsuno, Takeo; Ito, Hiroshi
2015-03-01
This study proposes novel optimized stem geometry with low stress values in the cement using a finite element (FE) analysis combined with an optimization procedure and experimental measurements of cement stress in vitro. We first optimized an existing stem geometry using a three-dimensional FE analysis combined with a shape optimization technique. One of the most important factors in the cemented stem design is to reduce stress in the cement. Hence, in the optimization study, we minimized the largest tensile principal stress in the cement mantle under a physiological loading condition by changing the stem geometry. As the next step, the optimized stem and the existing stem were manufactured to validate the usefulness of the numerical models and the results of the optimization in vitro. In the experimental study, strain gauges were embedded in the cement mantle to measure the strain in the cement mantle adjacent to the stems. The overall trend of the experimental study was in good agreement with the results of the numerical study, and we were able to reduce the largest stress by more than 50% in both shape optimization and strain gauge measurements. Thus, we could validate the usefulness of the numerical models and the results of the optimization using the experimental models. The optimization employed in this study is a useful approach for developing new stem designs.
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve the similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces 50 percent of the computational cost to achieve the similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
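The spatial part of such an optimization can be sketched by fitting the coefficients of a central second-derivative stencil so that its spectral response matches the exact symbol over a chosen wavenumber band, instead of matching Taylor conditions at zero wavenumber. The least-squares sketch below illustrates only the space-dispersion idea with an ordinary (2M+1)-point stencil; the joint space-time optimization and the extended stencil of the paper are not reproduced.

```python
import numpy as np

def optimized_fd_coeffs(M, h, kmax_frac=0.7, npts=400):
    """Least-squares coefficients of a (2M+1)-point second-derivative stencil.

    The spectral response (c0 + 2*sum_m cm*cos(m*k*h)) / h^2 is fitted to the
    exact symbol -k^2 over wavenumbers up to kmax_frac of Nyquist, which
    reduces dispersion over the whole band rather than only near k = 0."""
    k = np.linspace(1e-6, kmax_frac * np.pi / h, npts)
    # Columns: response of c0 and of each symmetric pair 2*cos(m*k*h).
    A = np.column_stack([np.ones_like(k)] +
                        [2.0 * np.cos(m * k * h) for m in range(1, M + 1)])
    target = -(k * h) ** 2                 # exact symbol scaled by h^2
    c, *_ = np.linalg.lstsq(A, target, rcond=None)
    return c                               # [c0, c1, ..., cM]

h = 10.0                                   # grid spacing in metres (example)
c = optimized_fd_coeffs(M=4, h=h)
print("optimized:", np.round(c, 4))
# For reference, the Taylor (non-optimized) 8th-order coefficients:
print("taylor   :", [-205/72, 8/5, -1/5, 8/315, -1/560])
```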
Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization
NASA Astrophysics Data System (ADS)
Kolosnitsyn, A. V.
2018-02-01
The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method based on a shift of the cutting plane, intended to cut off the maximum number of simplex vertices, are described. These modifications speed up the problem solution. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem; the other component can be easily recovered in a postprocessing procedure. Therefore, the algorithms presented here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
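The structure exploitation can be illustrated with off-the-shelf LOBPCG: for K = A + B and M = A - B both symmetric positive definite, the nonsymmetric product problem (M K)x = w^2 x is equivalent to the K-symmetric generalized problem (K M K)x = w^2 K x, which a standard symmetric eigensolver accepts. The sketch below uses random dense matrices and SciPy's lobpcg rather than the authors' modified algorithms.

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n = 200

def spd(n):
    """Random symmetric positive definite matrix."""
    Q = rng.standard_normal((n, n))
    return Q @ Q.T / n + np.diag(1.0 + rng.random(n))

# Synthetic linear-response-like structure: both K and M are SPD, so the
# product M @ K is self-adjoint with respect to the K-inner product.
K, M = spd(n), spd(n)

# Equivalent K-symmetric generalized problem handed to LOBPCG.
A_mat = K @ M @ K            # symmetric
B_mat = K                    # symmetric positive definite

X0 = rng.standard_normal((n, 5))                  # 5 lowest eigenpairs
w2, X = lobpcg(A_mat, X0, B=B_mat, largest=False, tol=1e-8, maxiter=500)

# Cross-check against a dense solve of the original nonsymmetric product.
w2_dense = np.sort(np.linalg.eigvals(M @ K).real)[:5]
print("lobpcg :", np.sort(w2))
print("dense  :", w2_dense)
```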
A comprehensive methodology for the multidimensional and synchronic data collecting in soundscape.
Kogan, Pablo; Turra, Bruno; Arenas, Jorge P; Hinalaf, María
2017-02-15
The soundscape paradigm comprises complex living systems in which individuals interact moment by moment with one another and with the physical environment. Real environments provide promising conditions to reveal deep soundscape behavior, including the multiple components involved and their interrelations as a whole. However, measuring and analyzing the numerous simultaneous variables of soundscape represents a challenge that is not completely understood. This work proposes and applies a comprehensive methodology for multidimensional and synchronic data collection in soundscape. The soundscape variables were organized into three main entities: experienced environment, acoustic environment, and extra-acoustic environment, containing in turn subgroups of variables called components. The variables contained in these components were acquired through synchronic field techniques that include surveys, acoustic measurements, audio recordings, photography, and video. The proposed methodology was tested, optimized, and applied in diverse open environments, including squares, parks, fountains, university campuses, streets, and pedestrian areas. The systematization of this comprehensive methodology provides a framework for soundscape research, support for urban and environmental management, and a preliminary procedure for standardization of soundscape data collection. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
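A toy version of the single-objective model can be written as a small integer linear program: binary variables pick one candidate route per channel, shared switches may serve at most one channel, and an auxiliary variable bounds the longest chosen path. The sketch below uses hypothetical data and the open-source PuLP/CBC toolchain as a stand-in for the CPLEX models of the paper.

```python
import pulp

# Hypothetical data: each channel has candidate routes; a route occupies a set
# of switches and has a path length (number of hops).
routes = {
    "ch1": {"r1": ({"s1", "s2"}, 3), "r2": ({"s3"}, 5)},
    "ch2": {"r1": ({"s2", "s4"}, 4), "r2": ({"s5", "s6"}, 2)},
    "ch3": {"r1": ({"s1"}, 2), "r2": ({"s4", "s5"}, 3)},
}
switches = {s for cand in routes.values() for path, _ in cand.values() for s in path}

prob = pulp.LpProblem("payload_config", pulp.LpMinimize)
x = {(c, r): pulp.LpVariable(f"x_{c}_{r}", cat="Binary")
     for c in routes for r in routes[c]}
longest = pulp.LpVariable("longest_path", lowBound=0)

prob += longest                                            # objective: min-max
for c in routes:
    prob += pulp.lpSum(x[c, r] for r in routes[c]) == 1    # one route per channel
    prob += pulp.lpSum(x[c, r] * routes[c][r][1] for r in routes[c]) <= longest
for s in switches:                                         # a switch serves at most one channel
    prob += pulp.lpSum(x[c, r] for c in routes for r in routes[c]
                       if s in routes[c][r][0]) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in routes:
    chosen = [r for r in routes[c] if pulp.value(x[c, r]) > 0.5][0]
    print(c, "->", chosen)
print("longest path:", pulp.value(longest))
```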
NASA Astrophysics Data System (ADS)
Akmaev, R. a.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain optimal in some sense estimates of the true state from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics-based space weather modeling and even forecasting.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
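As a rough illustration of the surrogate idea (not the paper's Bayesian-validated framework), the sketch below fits a cheap input-output model to a few calls of an expensive simulation, checks it on held-out random samples, and then optimizes the surrogate. The function expensive_simulation is a hypothetical stand-in for a Navier-Stokes solve.

```python
# A minimal surrogate sketch: construct from a few expensive runs, validate on a
# random sample, then optimize the cheap model in place of the simulation.
import numpy as np
from scipy.optimize import minimize_scalar

def expensive_simulation(x):            # placeholder for e.g. a spectral-element solve
    return np.sin(3 * x) + 0.5 * x**2

x_train = np.linspace(0.0, 2.0, 8)
y_train = np.array([expensive_simulation(x) for x in x_train])
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))   # simple polynomial surrogate

x_val = np.random.default_rng(0).uniform(0.0, 2.0, 16)       # validation sample
err = np.max(np.abs(surrogate(x_val) - [expensive_simulation(x) for x in x_val]))
print("max validation error:", err)

res = minimize_scalar(surrogate, bounds=(0.0, 2.0), method="bounded")  # optimize surrogate
print("surrogate optimum near x =", res.x)
```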
NASA Astrophysics Data System (ADS)
da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.
2018-04-01
A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.
Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L
2018-06-18
In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of the Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflection-Line (TRL), which is a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis/optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration makes it possible to extract/de-embed the acoustic component, namely a resonator or filter, from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.
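The de-embedding step that such a calibration enables can be sketched with transfer (T) matrices: once the transition two-ports are known, their T-matrices are stripped from the measured cascade. The S-parameters below are illustrative numbers, the two transitions are assumed identical and symmetric, and the code does not reproduce the TRL algorithm itself, only the de-embedding arithmetic.

```python
# A minimal de-embedding sketch using the wave-cascading convention [b1, a1] = T @ [a2, b2].
# T_dut = inv(T_left) @ T_meas @ inv(T_right); here left and right transitions are taken equal.
import numpy as np

def s_to_t(S):
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return (1.0 / S21) * np.array([[-(S11 * S22 - S12 * S21), S11],
                                   [-S22, 1.0]])

def t_to_s(T):
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[T12 / T22, (T11 * T22 - T12 * T21) / T22],
                     [1.0 / T22, -T21 / T22]])

S_idt = np.array([[0.2 + 0.1j, 0.95],            # symmetric IDT-to-waveguide transition
                  [0.95, 0.2 + 0.1j]])           # (illustrative values)
S_meas = np.array([[0.3 + 0.2j, 0.6],            # measured/simulated transition-device-transition cascade
                   [0.6, 0.3 - 0.1j]])

T_idt = s_to_t(S_idt)
T_dut = np.linalg.inv(T_idt) @ s_to_t(S_meas) @ np.linalg.inv(T_idt)  # strip both transitions
print(t_to_s(T_dut))   # scattering parameters of the embedded acoustic component
```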
NASA Astrophysics Data System (ADS)
Hu, Q.; Li, Y.; Pan, H. L.; Liu, J. T.; Zhuang, B. T.
2015-01-01
The vane-type propellant management device (PMD) is one of the key components of the vane-type surface tension tank (STT), and its fluid orbital performance directly determines the STT's success or failure. In the present paper, a numerical analysis and a microgravity experimental study of the fluid orbital performance of a vane-type PMD were carried out. Using the volume-of-fluid (VOF) two-phase flow model, the fluid flow characteristics in the tank with the vane-type PMD were numerically calculated, and the rules of fluid transfer and distribution were obtained. An abbreviated model test system of the vane-type PMD was established and microgravity drop-tower tests were performed; the fluid management and transmission rules of the vane-type PMD were then obtained under a microgravity environment. The analysis and test results show that the vane-type PMD has good, active fluid orbital management capability and meets the demands of fluid orbital extrusion in the vane-type STT. The results offer valuable guidance for the design and optimization of the new generation of vane-type PMDs, and also provide a new approach for fluid management and control in the space environment.
Geometric optimization of thermal systems
NASA Astrophysics Data System (ADS)
Alebrahim, Asad Mansour
2000-10-01
The work in chapter 1 extends to three dimensions and to convective heat transfer the constructal method of minimizing the thermal resistance between a volume and one point. In the first part, the heat flow mechanism is conduction, and the heat generating volume is occupied by low conductivity material (k0) and high conductivity inserts (kp) that are shaped as constant-thickness disks mounted on a common stem of kp material. In the second part the interstitial spaces once occupied by k0 material are bathed by forced convection. The internal and external geometric aspect ratios of the elemental volume and the first assembly are optimized numerically subject to volume constraints. Chapter 2 presents the constrained thermodynamic optimization of a cross-flow heat exchanger with ram air on the cold side, which is used in the environmental control systems of aircraft. Optimized geometric features such as the ratio of channel spacings and flow lengths are reported. It is found that the optimized features are relatively insensitive to changes in other physical parameters of the installation and relatively insensitive to the additional irreversibility due to discharging the ram-air stream into the atmosphere, emphasizing the robustness of the thermodynamic optimum. In chapter 3 the problem of maximizing exergy extraction from a hot stream by distributing streams over a heat transfer surface is studied. In the first part, the cold stream is compressed in an isothermal compressor, expanded in an adiabatic turbine, and discharged into the ambient. In the second part, the cold stream is compressed in an adiabatic compressor. Both designs are optimized with respect to the capacity-rate imbalance of the counter-flow and the pressure ratio maintained by the compressor. This study shows the tradeoff between simplicity and increased performance, and outlines the path for further conceptual work on the extraction of exergy from a hot stream that is being cooled gradually. The aim of chapter 4 was to optimize the performance of a boot-strap air cycle of an environmental control system (ECS) for aircraft. New in the present study was that the optimization refers to the performance of the entire ECS system, not to the performance of an individual component. Also, there were two heat exchangers, not one, and their relative positions and sizes were not specified in advance. This study showed that geometric optimization can be identified when the optimization procedure refers to the performance of the entire ECS system, not to the performance of an individual component. These optimized features were robust relative to some physical parameters. This robustness may be used to simplify future optimization of similar systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surzhikov, S.T.
1996-12-31
A two-dimensional radiative gas dynamics model for the numerical simulation of an oxygen-hydrogen fire ball, which may be generated by an explosion of a launch vehicle with cryogenic (LO{sub 2}-LH{sub 2}) fuel components, is presented. The following physical-chemical processes are taken into account in the numerical model: an effective chemical reaction between the gaseous components (O{sub 2}-H{sub 2}) of the propellant, turbulent mixing and diffusion of the components, and radiative heat transfer. The results of numerical investigations of the following problems are presented: the influence of radiative heat transfer on fire ball gas dynamics during the first 13 sec after the explosion, the effect of afterburning of the gaseous fuel components on fire ball gas dynamics, and the effect of turbulence on fire ball gas dynamics (in the framework of an algebraic model of turbulent mixing).
NASA Astrophysics Data System (ADS)
Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.
2014-06-01
The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a load reduction not-to-exceed the next dominating loads, a not-to-be-exceeded maximum rotor speed, and a maximum achievable blade pitch rate. Cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated by using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the novel proposed formulation, by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.
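A minimal direct-single-shooting sketch may clarify the structure of such a synthesis: the pitch profile is parameterized at a few nodes, a rotor model is integrated for each candidate profile, and a weighted overspeed/load cost is minimized under a pitch-rate bound. The dynamics, weights, and limits below are illustrative assumptions, not the paper's aeroservoelastic model.

```python
# A direct single shooting sketch with a toy rotor-overspeed model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_nodes = np.linspace(0.0, 10.0, 6)            # pitch profile parameterized at 6 nodes [s]

def simulate(theta_nodes):
    theta = lambda t: np.interp(t, t_nodes, theta_nodes)        # pitch angle [deg]
    def rhs(t, y):                                              # toy rotor dynamics
        omega = y[0]
        aero_torque = max(0.0, 1.0 - 0.05 * theta(t)) * omega   # crude pitch-to-feather effect
        return [aero_torque - 0.8 * omega]                      # inertia normalized out
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0], max_step=0.1)
    omega = sol.y[0]
    load_proxy = np.max(np.abs(np.gradient(omega, sol.t)))      # stand-in for blade loads
    return np.max(omega), load_proxy

def cost(theta_nodes):
    overspeed, load = simulate(theta_nodes)
    return 1.0 * overspeed + 0.5 * load                         # compromise between objectives

rate_limit = [{"type": "ineq",                                  # |pitch rate| <= 8 deg/s
               "fun": lambda th, i=i: 8.0 - abs(th[i + 1] - th[i]) / (t_nodes[1] - t_nodes[0])}
              for i in range(len(t_nodes) - 1)]
res = minimize(cost, x0=np.zeros(6), constraints=rate_limit, method="SLSQP")
print(res.x)   # optimized open-loop pitch schedule at the nodes
```

In the paper the single simulation call is replaced by a sweep over many operating conditions evaluated with the high-fidelity aeroservoelastic model.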
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
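In the spirit of the iteration described above, the sketch below applies the relaxed (step-size w) form of the familiar step-size-1 successive-approximations update to a two-component univariate normal mixture; the data and starting values are illustrative.

```python
# Relaxed successive-approximations update for a two-component normal mixture:
# w = 1 recovers the standard update; 0 < w < 2 is the step-size range discussed above.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.5, 1.5])
w = 1.4                                             # step size in (0, 2)

for _ in range(200):
    resp = p * normal_pdf(x[:, None], mu, sigma)    # posterior component responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    n_k = resp.sum(axis=0)
    p_new = n_k / len(x)                            # step-size-1 updates
    mu_new = (resp * x[:, None]).sum(axis=0) / n_k
    sigma_new = np.sqrt((resp * (x[:, None] - mu_new) ** 2).sum(axis=0) / n_k)
    p = (1 - w) * p + w * p_new                     # deflected (relaxed) step
    mu = (1 - w) * mu + w * mu_new
    sigma = (1 - w) * sigma + w * sigma_new

print(p, mu, sigma)
```

The optimal w discussed in the abstract depends on how well separated the component densities are, which this toy example does not attempt to estimate.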
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Hierarchy in directed random networks.
Mones, Enys
2013-02-01
In recent years, the theory and application of complex networks have been developing quickly and in a remarkable way, due to the increasing amount of data from real systems and the fruitful application of powerful methods used in statistical physics. Many important characteristics of social or biological systems can be described by the study of their underlying structure of interactions. Hierarchy is one of these features that can be formulated in the language of networks. In this paper we present some (qualitative) analytic results on the hierarchical properties of random network models with zero correlations and also investigate, mainly numerically, the effects of different types of correlations. The behavior of the hierarchy is different in the absence and the presence of giant components. We show that the hierarchical structure can be drastically different if there are one-point correlations in the network. We also show numerical results suggesting that the hierarchy does not change monotonically with the correlations and there is an optimal level of nonzero correlations maximizing the level of hierarchy.
Numerical study of a cryogen-free vuilleumier type pulse tube cryocooler operating below 10 K
NASA Astrophysics Data System (ADS)
Wang, Y. N.; Wang, X. T.; Dai, W.; Luo, E. C.
2017-12-01
This paper presents a numerical investigation of a Vuilleumier (VM) type pulse tube cooler. Unlike previous systems that use liquid nitrogen, Stirling-type pre-coolers are used to provide the cooling power for the thermal compressor, which leads to a convenient cryogen-free system and offers the flexibility of changing the working temperature range of the thermal compressor to obtain an optimum efficiency. First, the main component dimensions were optimized with the lowest no-load temperature as the target. Then the dependence of the system performance on the average pressure, frequency, displacer displacement amplitude, and thermal-compressor pre-cooling temperature was studied. Finally, the effect of the pre-cooling temperature on the overall cooling efficiency at 5 K was studied. A highest relative Carnot efficiency of 0.82% was predicted with an average pressure of 2.5 MPa, a frequency of 3 Hz, a displacer displacement amplitude of 6.5 mm, an ambient-end temperature of 300 K, and a pre-cooling temperature of 65 K.
NASA Astrophysics Data System (ADS)
Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi
2018-01-01
The numerous communication techniques and optical devices successfully applied in space optical communication systems indicate their good portability. With this portability, the typical coherent demodulation technique of the Costas loop can easily be adopted in a space optical communication system. As one of the components of pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDF) of different jitter degrees and explain their essential effect on the bit error rate (BER) of a space optical communication system. Also, under the effect of jitter, we study the bit error rate of a space coherent optical communication system using a Costas loop with different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and the phase deviation caused by the Costas loop. Through a numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these system parameters, and some corresponding methods of system optimization are presented to enhance the communication quality.
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
Optimization benefits analysis in production process of fabrication components
NASA Astrophysics Data System (ADS)
Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.
2017-12-01
The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as liner plates), which influences the profit obtained by the company. The liner plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuates, and no direct conclusion can be drawn about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. With the optimal combination of components, PT. UTPE gains a profit increase of Rp 105,285,000.00, for a total of Rp 3,046,525,000.00 per month, with a total combined production of 71 units across the product variants per month.
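A minimal primal LP of the product-mix type can be written directly; since the abstract does not give the model's coefficients, the profits and resource limits below are hypothetical.

```python
# A minimal product-mix LP sketch; scipy's linprog minimizes, so profits enter negated.
from scipy.optimize import linprog

profit = [1.5, 2.0, 1.2, 1.8]        # profit per unit of each liner-plate variant (illustrative)
A_ub = [[2, 3, 1, 2],                # machine hours needed per unit
        [1, 2, 2, 1]]                # welding hours needed per unit
b_ub = [160, 120]                    # available hours per month

res = linprog(c=[-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 4, method="highs")
print("units per variant:", res.x, "profit:", -res.fun)
```

Dual values (shadow prices) and sensitivity ranges of the kind reported in the study are available from the solver's dual solution for each resource constraint.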
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
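The forward block Gauss-Seidel sweep on a block tridiagonal system can be sketched as follows; the random blocks stand in for the subinterval optimality systems, and in practice the sweep would serve as a preconditioner for a Krylov method rather than as a stand-alone solver.

```python
# One forward block Gauss-Seidel sweep on a block tridiagonal system K z = r.
# The blocks below are illustrative; in the application each diagonal block is the
# optimality system on one time subinterval.
import numpy as np

rng = np.random.default_rng(0)
N, m = 5, 4                                    # number of subintervals, block size
D = [np.eye(m) * 4 + rng.normal(size=(m, m)) * 0.1 for _ in range(N)]   # diagonal blocks
L = [rng.normal(size=(m, m)) * 0.1 for _ in range(N - 1)]               # sub-diagonal blocks
U = [rng.normal(size=(m, m)) * 0.1 for _ in range(N - 1)]               # super-diagonal blocks
r = [rng.normal(size=m) for _ in range(N)]

def block_gs_sweep(z):
    z = [zi.copy() for zi in z]
    for i in range(N):                         # forward sweep: solve block i with latest data
        rhs = r[i].copy()
        if i > 0:
            rhs -= L[i - 1] @ z[i - 1]
        if i < N - 1:
            rhs -= U[i] @ z[i + 1]
        z[i] = np.linalg.solve(D[i], rhs)      # "invert" the subinterval problem
    return z

z = [np.zeros(m) for _ in range(N)]
for _ in range(20):                            # used alone here only for illustration
    z = block_gs_sweep(z)
print(np.concatenate(z))
```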
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre
Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
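The map-reduce pattern for contingency tables is easy to sketch: each task builds a local table of (row, column) counts and the tables are merged, after which joint and marginal probabilities follow directly. The data and categories below are illustrative.

```python
# Map-reduce style contingency table: local counts per chunk, then a merge step.
from collections import Counter

chunks = [                                   # data distributed across tasks (illustrative)
    [("red", "A"), ("red", "B"), ("blue", "A")],
    [("blue", "B"), ("red", "A"), ("blue", "A")],
]

local_tables = [Counter(chunk) for chunk in chunks]     # "map": one table per task
table = sum(local_tables, Counter())                    # "reduce": merge the counts
n = sum(table.values())

joint = {pair: c / n for pair, c in table.items()}      # joint probabilities
row_marginal = Counter()
for (row, _), c in table.items():
    row_marginal[row] += c
print(joint, {r: c / n for r, c in row_marginal.items()})
```

The communication cost the abstract highlights corresponds to the size of the local tables exchanged in the merge step, which grows with the number of distinct category pairs rather than with the raw data size.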
Advanced rotorcraft control using parameter optimization
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output feedback controller that minimizes a finite time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed loop eigensystem. This approach, through the use of an accurate Padé series approximation, does not require the closed loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low order controllers. Usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed loop system under an arbitrary controller design initialization and during the numerical search.
An optimal control method for fluid structure interaction systems via adjoint boundary pressure
NASA Astrophysics Data System (ADS)
Chirco, L.; Da Vià, R.; Manservisi, S.
2017-11-01
In recent years, in spite of their computational complexity, fluid-structure interaction (FSI) problems have been widely studied due to their applicability in science and engineering. Fluid-structure interaction systems consist of one or more solid structures that deform by interacting with a surrounding fluid flow. FSI simulations evaluate the tensional state of the mechanical component and take into account the effects of the solid deformations on the motion of the interior fluids. The inverse FSI problem can be described as the achievement of a certain objective by changing some design parameters, such as forces, boundary conditions, and geometrical domain shapes. In this paper we study the inverse FSI problem using an optimal control approach. In particular, we propose a pressure boundary optimal control method based on Lagrange multipliers and adjoint variables. The objective is the minimization of a solid-domain displacement matching functional, obtained by finding the optimal pressure on the inlet boundary. The optimality system is derived from the first-order necessary conditions by taking the Fréchet derivatives of the Lagrangian with respect to all the variables involved. The optimal solution is then obtained through a standard steepest descent algorithm applied to the optimality system. The approach presented in this work is general and could be used to assess other objective functionals and controls. In order to support the proposed approach, we perform a few numerical tests where the fluid pressure on the domain inlet controls the displacement that occurs in a well-defined region of the solid domain.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
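Outside any particular framework, the payoff of analytic derivatives can be illustrated with a toy objective: supplying the gradient spares the optimizer the extra function evaluations per iteration that finite differencing needs, a cost that grows with the number of design variables. The objective below is illustrative and unrelated to the trajectory problem.

```python
# Toy comparison: gradient-based optimization with and without an analytic jacobian.
import numpy as np
from scipy.optimize import minimize

n = 200
def f(x):
    return np.sum(x**2) + np.sum(np.sin(x[:-1] * x[1:]))

def grad_f(x):                       # analytic gradient of f
    g = 2 * x
    c = np.cos(x[:-1] * x[1:])
    g[:-1] += c * x[1:]
    g[1:] += c * x[:-1]
    return g

x0 = np.ones(n)
with_jac = minimize(f, x0, jac=grad_f, method="L-BFGS-B")
without = minimize(f, x0, method="L-BFGS-B")     # falls back to finite differences
print(with_jac.nfev, without.nfev)               # far fewer evaluations with the analytic jac
```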
Sikorsky interactive graphics surface design/manufacturing system
NASA Technical Reports Server (NTRS)
Robbins, R.
1975-01-01
An interactive graphics system conceived to be used in the design, analysis, and manufacturing of aircraft components with free form surfaces was described. In addition to the basic surface definition and viewing capabilities inherent in such a system, numerous other features are present: surface editing, automated smoothing of control curves, variable milling patch boundary definitions, surface intersection definition and viewing, automatic creation of true offset surfaces, digitizer and drafting machine interfaces, and cutter path optimization. Documented costs and time savings of better than six to one are being realized with this system. The system was written in FORTRAN and GSP for use on IBM 2250 CRT's in conjunction with an IBM 370/158 computer.
The role of nutrition and nutraceutical supplements in the treatment of hypertension
Houston, Mark
2014-01-01
Vascular biology, endothelial and vascular smooth muscle and cardiac dysfunction play a primary role in the initiation and perpetuation of hypertension, cardiovascular disease and target organ damage. Nutrient-gene interactions and epigenetics are predominant factors in promoting beneficial or detrimental effects in cardiovascular health and hypertension. Macronutrients and micronutrients can prevent, control and treat hypertension through numerous mechanisms related to vascular biology. Oxidative stress, inflammation and autoimmune dysfunction initiate and propagate hypertension and cardiovascular disease. There is a role for the selected use of single and component nutraceutical supplements, vitamins, antioxidants and minerals in the treatment of hypertension based on scientifically controlled studies which complement optimal nutrition, coupled with other lifestyle modifications. PMID:24575172
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raspa, V.; Moreno, C.; Sigaut, L.
The effective spectrum of the hard x-ray output of a Mather-type tabletop plasma focus device was determined from attenuation data on metallic samples using commercial radiographic film coupled to a Gd{sub 2}O{sub 2}S:Tb phosphor intensifier screen. It was found that the radiation has relevant spectral components in the 40-150 keV range, with a single maximum around 60-80 keV. The radiation output allows for 50 ns resolution, good contrast, and introspective imaging of metallic objects even through metallic walls. A numerical estimation of the induced voltage on the focus during the compressional stage is briefly discussed.
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques. Adequate implementation of this technology will provide high pay-offs. There are a number of numerical optimization programs available, and there are many excellent response/performance analysis programs developed or being developed. But integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system to integrate the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Ho, Y.; Basson, A.
1993-07-01
The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from the experimental data. The results show that the stator potential effects have appreciable influence on the upstream rotor wake. The predicted unsteady wake profiles are compared with the available experimental data and the agreement is good. The numerical results are interpreted to draw conclusions on the unsteady wake transport mechanism in the blade passage.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
NASA Astrophysics Data System (ADS)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open-water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open-water tests were carried out and confirmed that the optimized ducted propeller improves hydrodynamic performance as predicted.
Study of the Polarization Strategy for Electron Cyclotron Heating Systems on HL-2M
NASA Astrophysics Data System (ADS)
Zhang, F.; Huang, M.; Xia, D. H.; Song, S. D.; Wang, J. Q.; Huang, B.; Wang, H.
2016-06-01
As important components integrated in transmission lines of electron cyclotron heating systems, polarizers are mainly used to obtain the desired polarization for highly efficient coupling between electron cyclotron waves and plasma. The polarization strategy for 105-GHz electron cyclotron heating systems of HL-2M tokamak is studied in this paper. Considering the polarizers need high efficiency, stability, and low loss to realize any polarization states, two sinusoidal-grooved polarizers, which include a linear polarizer and an elliptical polarizer, are designed with the coordinate transformation method. The parameters, the period p and the depth d, of two sinusoidal-grooved polarizers are optimized by a phase difference analysis method to achieve an almost arbitrary polarization. Finally, the optimized polarizers are manufactured and their polarization characteristics are tested with a low-power test platform. The experimental results agree well with the numerical calculations, indicating that the designed polarizers can meet the polarization requirements of the electron cyclotron heating systems of HL-2M tokamak.
Optimal frequency-response sensitivity of compressible flow over roughness elements
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.
2017-04-01
Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case which shows that first-order increases in Reynolds number and roughness height act destabilising on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.
Qubit absorption refrigerator at strong coupling
NASA Astrophysics Data System (ADS)
Mu, Anqi; Agarwalla, Bijay Kumar; Schaller, Gernot; Segal, Dvira
2017-12-01
We demonstrate that a quantum absorption refrigerator (QAR) can be realized from the smallest quantum system, a qubit, by coupling it in a non-additive (strong) manner to three heat baths. This function is unattainable for the qubit model in the weak system-bath coupling limit, when the dissipation is additive. In an optimal design, the reservoirs are engineered and characterized by a single frequency component. We then obtain closed expressions for the cooling window and refrigeration efficiency, as well as bounds for the maximal cooling efficiency and the efficiency at maximal power. Our results agree with macroscopic designs and with three-level models for QARs, which are based on the weak system-bath coupling assumption. Beyond the optimal limit, we show with analytical calculations and numerical simulations that the cooling efficiency varies in a non-universal manner with model parameters. Our work demonstrates that strongly-coupled quantum machines can exhibit function that is unattainable under the weak system-bath coupling assumption.
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-09
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
Development of Pelton turbine using numerical simulation
NASA Astrophysics Data System (ADS)
Patel, K.; Patel, B.; Yadav, M.; Foggia, T.
2010-08-01
This paper describes recent research and development activities in the field of Pelton turbine design. The flow inside a Pelton turbine is highly complex because it is multiphase (a mixture of air and water) and has a free surface. Numerical calculation is useful for understanding the flow physics as well as the effect of geometry on the flow. The optimized design is obtained using an in-house optimization loop. Either single-phase or two-phase unsteady numerical calculations can be performed. Numerical results are used to visualize the flow pattern in the water passage and to predict the performance of the Pelton turbine at full load as well as at part load. Model tests are conducted to determine the performance of the turbine, and the results show good agreement with the numerically predicted performance.
Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John
2011-01-01
Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.
A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system
NASA Astrophysics Data System (ADS)
Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok
2012-02-01
We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.
NASA Astrophysics Data System (ADS)
Alsyouf, Imad
2018-05-01
Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. The preventive replacement time affects the reliability of the components, and thus the number of system failures encountered and the associated downtime expenses. On the other hand, the spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously for the selection of the optimal maintenance policy. The purpose of this research was to develop a bi-objective model that is used to determine the preventive replacement time for three maintenance policies (age, block good-as-new, block bad-as-old) with consideration of spare parts' availability. A weighted comprehensive criterion method with two objectives, cost and availability, was suggested. The model was tested with a typical numerical example. The results of the model demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios while taking into account preferences with respect to conflicting objectives such as cost and availability.
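A weighted comprehensive criterion of this kind can be sketched as a normalized weighted sum over candidate (policy, replacement-time) pairs; the candidate costs, availabilities, and weights below are hypothetical, and the paper's underlying cost and availability models are not reproduced.

```python
# Weighted comprehensive criterion sketch: normalize each objective, score, pick the best compromise.
candidates = [                          # (policy, replacement time [h], cost, availability) - illustrative
    ("age", 500, 12.0, 0.962),
    ("block good-as-new", 600, 10.5, 0.950),
    ("block bad-as-old", 450, 13.2, 0.971),
]
w_cost, w_avail = 0.6, 0.4              # decision maker's preference weights

costs = [c for _, _, c, _ in candidates]
avails = [a for _, _, _, a in candidates]
c_min, c_max = min(costs), max(costs)
a_min, a_max = min(avails), max(avails)

def score(cost, avail):                 # lower is better after normalization
    return (w_cost * (cost - c_min) / (c_max - c_min)
            + w_avail * (a_max - avail) / (a_max - a_min))

best = min(candidates, key=lambda cand: score(cand[2], cand[3]))
print("selected policy:", best[0], "at T =", best[1])
```

Changing the weights reproduces the different decision-maker scenarios the abstract alludes to.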
Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.
2016-01-01
The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation for the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that will enable the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
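The selection step of such a genetic search can be sketched generically (the paper's "Luck-Choose" mechanism is not specified in the abstract and is not reproduced here): individuals are scored by a fidelity function and parents are drawn fitness-proportionally. The fidelity function below is a placeholder, not a gate-fidelity calculation.

```python
# A generic fitness-proportional (roulette-wheel) selection sketch for a genetic search
# over candidate gate/pulse parameterizations; fidelity() is a placeholder.
import random

def fidelity(individual):                      # placeholder for gate-fidelity evaluation
    return 1.0 / (1.0 + sum(abs(g) for g in individual))

population = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(20)]  # pulse parameters

def select_parents(population, k):
    fitnesses = [fidelity(ind) for ind in population]
    return random.choices(population, weights=fitnesses, k=k)   # fitness-proportional draw

parents = select_parents(population, k=10)
print(len(parents), max(fidelity(p) for p in parents))
```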
ERIC Educational Resources Information Center
Lappas, Pantelis Z.; Kritikos, Manolis N.
2018-01-01
The main objective of this paper is to propose a didactic framework for teaching Applied Mathematics in higher education. After describing the structure of the framework, several applications of inquiry-based learning in teaching numerical analysis and optimization are provided to illustrate the potential of the proposed framework. The framework…
The roles of exercise and fall risk reduction in the prevention of osteoporosis.
Henderson, N K; White, C P; Eisman, J A
1998-06-01
In summary, the optimal model for the prevention of osteoporotic fractures includes maximization and maintenance of bone strength and minimization of trauma. Numerous determinants of each have been identified, but further work to develop preventative strategies based on these determinants remains to be undertaken. Physical activity is a determinant of peak BMD. There also is evidence that activity during growth modulates the external geometry and trabecular architecture, potentially enhancing skeletal strength, while during the adult years activity may reduce age-related bone loss. The magnitude of the effect of a 7% to 8% increase in peak BMD, if maintained through the adult years, could translate to a 1.5-fold reduction in fracture risk. Moreover, in the older population, appropriate forms of exercise could reduce the risk of falling and, thus, further reduce fracture risk. These data must be considered as preliminary in view of the paucity of long-term fracture outcome data from randomized clinical trials. However, current information suggests that the optimal form of exercise to achieve these objectives may vary through life. Vigorous physical activity (including weight-bearing, resistance, and impact components) during childhood may maximize peak BMD. This type of activity seems optimal through the young adult years, but as inevitable age-related degeneration occurs, activity modification to limit the impact component of exercise may become necessary. In the elderly, progressive strength training has been demonstrated to be a safe and effective form of exercise that reduces risk factors for falling and may also enhance BMD. In the frail elderly, activity to improve balance and confidence also may be valuable. Group activities such as Tai Chi may be cost-effective. Precise prescriptions must await the outcome of well-designed, controlled longitudinal studies that include fracture as an outcome. However, increased physical activity seems to be a sensible component of strategies to reduce osteoporotic fracture.
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
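As context for the mutation-crossover-selection cycle described above, here is a minimal sketch of the canonical DE/rand/1/bin loop; the population size, F, and CR values are generic defaults, not those used in the study.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=0):
    """Minimal DE/rand/1/bin loop: mutation, crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([obj(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # force one change
            trial = np.where(cross, mutant, pop[i])        # crossover
            f = obj(trial)
            if f < cost[i]:                                # greedy selection
                pop[i], cost[i] = trial, f
    return pop[cost.argmin()], cost.min()
```

For example, `differential_evolution(lambda x: float((x**2).sum()), [(-5, 5)] * 4)` recovers the origin for a simple quadratic test function.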
Modeling and Simulation of the Off-gas in an Electric Arc Furnace
NASA Astrophysics Data System (ADS)
Meier, Thomas; Gandt, Karima; Echterhof, Thomas; Pfeifer, Herbert
2017-12-01
The following paper describes an approach to process modeling and simulation of the gas phase in an electric arc furnace (EAF). The work presented represents the continuation of research by Logar, Dovžan, and Škrjanc on modeling the heat and mass transfer and the thermochemistry in an EAF. Due to the lack of off-gas measurements, Logar et al. modeled a simplified gas phase under consideration of five gas components and simplified chemical reactions. The off-gas is one of the main continuously measurable EAF process values, and the off-gas flow represents a heat loss of up to 30 pct of the entire EAF energy input. Therefore, gas phase modeling offers further development opportunities for future EAF optimization. This paper presents the enhancement of the previous EAF gas phase modeling by the consideration of additional gas components and a more detailed heat and mass transfer modeling. In order to avoid the increase of simulation time due to more complex modeling, the EAF model has been newly implemented to use an efficient numerical solver for ordinary differential equations. Compared to the original model, the chemical components H2, H2O, and CH4 are included in the gas phase and equilibrium reactions are implemented. The results show high levels of similarity between the measured operational data from an industrial-scale EAF and the theoretical data from the simulation within a reasonable simulation time. In the future, the dynamic EAF model will be applicable for on- and offline optimizations, e.g., to analyze alternative input materials and modes of operation.
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
Ocampo, Cesar
2004-05-01
The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework to facilitate the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that generally depend on position, velocity, mass, and time. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in Copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.
Numerical optimization of perturbative coils for tokamaks
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team
2014-10-01
Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectrums which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.
Study of the motion of optimal bodies in soil by a grid method
NASA Astrophysics Data System (ADS)
Kotov, V. L.; Linnik, E. Yu
2016-11-01
The paper presents a method for calculating optimal body shapes within an axisymmetric numerical framework based on the Godunov scheme and Grigoryan's elastoplastic soil model. Two problems are solved for determining the generatrix of a body of revolution of given length and base radius: a body of minimum penetration resistance and a body of maximum penetration depth. The numerical calculations are carried out with a modified method of local variations, which significantly reduces the number of operations for different representations of the generatrix. The use of a quadratic local-interaction model for preliminary assessments considerably simplifies the search for the optimal body. A qualitative similarity is noted between the convergence of the numerical optimization based on the local-interaction model and that based on continuum mechanics. The resulting optimal bodies are compared with absolutely optimal bodies, which possess the minimum penetration resistance attainable under the given constraints on the geometry. It is shown that a conical striker with a variable apex angle equal to the solution angle, being the absolutely optimal body of minimum penetration resistance for each value of the penetration velocity, has a final penetration depth only 12% greater than that of the traditional absolutely optimal body of maximum penetration depth.
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape are selected as sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error of 0.9844 inches of the starting configuration becomes 0.00367 inch by the end of the third optimization run.
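The least-squares step for the starting design variables can be pictured as fitting basis-shape coefficients to the desired trim shape; the sketch below is a generic illustration with hypothetical array names, not the in-house object-oriented tool used in the study.

```python
import numpy as np

def fit_jig_shape(baseline, basis, target):
    """Least-squares starting point: find coefficients c such that
    baseline + basis @ c best matches the target (e.g. desired trim) shape.
    baseline, target: (n_points,) surface deflections; basis: (n_points, n_modes)."""
    c, *_ = np.linalg.lstsq(basis, target - baseline, rcond=None)
    residual = target - (baseline + basis @ c)
    return c, np.abs(residual).max()   # coefficients and max trim shape error
```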
Generalized bipartite quantum state discrimination problems with sequential measurements
NASA Astrophysics Data System (ADS)
Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki
2018-02-01
We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
Optimal control on bladder cancer growth model with BCG immunotherapy and chemotherapy
NASA Astrophysics Data System (ADS)
Dewi, C.; Trisilowati
2015-03-01
In this paper, an optimal control model of the growth of bladder cancer with BCG (Bacillus Calmette-Guérin) immunotherapy and chemotherapy is discussed. The purpose of this optimal control is to determine the amounts of BCG vaccine and drug that should be given during treatment so that the growth of bladder cancer cells can be suppressed. The optimal control is obtained by applying Pontryagin's principle. Furthermore, the optimal control problem is solved numerically using the Forward-Backward Sweep method. Numerical simulations show the effectiveness of the vaccine and drug in controlling the growth of cancer cells. Hence, the treatment can reduce the number of cancer cells that are not infected with BCG as well as minimize the cost of the treatment.
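The Forward-Backward Sweep iteration itself is standard; a minimal sketch for a single-state, single-control problem (explicit Euler sweeps, relaxed control update) is shown below. The model functions f, adjoint_rhs, and optimal_u are placeholders, not the paper's bladder-cancer dynamics.

```python
import numpy as np

def forward_backward_sweep(f, adjoint_rhs, optimal_u, x0, T, n=1000,
                           relax=0.5, tol=1e-6, max_iter=200):
    """Generic forward-backward sweep: integrate the state forward, the
    adjoint backward (with lambda(T) = 0), then update the control from the
    optimality condition, until the control converges.
    f(x, u): state rate; adjoint_rhs(x, u, lam): adjoint rate;
    optimal_u(x, lam): control array from the optimality condition."""
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    u = np.zeros(n + 1)                       # initial control guess
    x = np.full(n + 1, x0, dtype=float)
    lam = np.zeros(n + 1)
    for _ in range(max_iter):
        for i in range(n):                    # forward sweep (explicit Euler)
            x[i + 1] = x[i] + h * f(x[i], u[i])
        for i in range(n, 0, -1):             # backward sweep
            lam[i - 1] = lam[i] - h * adjoint_rhs(x[i], u[i], lam[i])
        u_new = relax * optimal_u(x, lam) + (1 - relax) * u
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return t, x, u
```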
A Method for the Selection of Exploration Areas for Unconformity Uranium Deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, DeVerle P.; Zaluski, Gerard; Marlatt, James
2009-06-15
The method we propose employs two analyses: (1) exploration simulation and risk valuation and (2) portfolio optimization. The first analysis, implemented by the investment worth system (IWS), uses Monte Carlo simulation to integrate a wide spectrum of uncertain and varied components to a relative frequency histogram for net present value of the exploration investment, which is converted to a risk-adjusted value (RAV). Iterative rerunning of the IWS enables the mapping of the relationship of RAV to magnitude of exploration expenditure, X. The second major analysis uses RAV vs. X maps to identify that subset (portfolio) of areas that maximizes the RAV of the firm's multiyear exploration budget. The IWS, which is demonstrated numerically, consists of six components based on the geologic description of a hypothetical basin and project area (PA) and a mix of hypothetical and actual conditions of an unidentified area. The geology is quantified and processed by Bayesian belief networks to produce the geology-based inputs required by the IWS. An exploration investment of $60 M produced a highly skewed distribution of net present value (NPV), having mean and median values of $4,160 M and $139 M, respectively. For hypothetical mining firm Minex, the RAV of the exploration investment of $60 M is only $110.7 M. An RAV that is less than 3% of mean NPV reflects the aversion by Minex to risk as well as the magnitude of risk implicit to the highly skewed NPV distribution and the probability of 0.45 for capital loss. Potential benefits of initiating exploration of a portfolio of areas, as contrasted with one area, include increased marginal productivity of exploration as well as reduced probability for nondiscovery. For an exogenously determined multiyear exploration budget, a conceptual framework for portfolio optimization is developed based on marginal RAV exploration products for candidate PAs. PORTFOLIO, a software developed to implement optimization, allocates exploration to PAs so that the RAV of the exploration budget is maximized. Moreover, PORTFOLIO provides a means to examine the impact of magnitude of budget on the composition of the exploration portfolio and the optimum allocation of exploration to PAs that comprise the portfolio. Using fictitious data for five PAs, a numerical demonstration is provided of the use of PORTFOLIO to identify those PAs that comprise the optimum exploration portfolio and to optimally allocate the multiyear budget across portfolio PAs.
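A toy version of the Monte Carlo step, with all probabilities, distributions, and the exponential-utility form of the RAV chosen purely for illustration (the paper's IWS inputs and exact RAV definition are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical exploration outcome model: assumed probability of discovery,
# assumed lognormal deposit value if found, fixed exploration expenditure.
expenditure = 60.0                                   # $M
discovery = rng.random(n) < 0.30                     # assumed success probability
value = rng.lognormal(mean=5.0, sigma=1.5, size=n)   # assumed deposit NPV, $M
npv = np.where(discovery, value, 0.0) - expenditure

def rav(npv_samples, risk_tolerance=25.0):
    """Certainty-equivalent risk-adjusted value under an exponential utility
    with the given risk tolerance (one common RAV definition)."""
    return -risk_tolerance * np.log(np.mean(np.exp(-npv_samples / risk_tolerance)))

print(f"mean NPV   {npv.mean():8.1f} $M")
print(f"median NPV {np.median(npv):8.1f} $M")
print(f"P(loss)    {np.mean(npv < 0):8.2f}")
print(f"RAV        {rav(npv):8.1f} $M")
```

Even with a positive mean NPV, the RAV of such a highly skewed distribution can be small or negative for a risk-averse firm, which is the qualitative behavior the abstract describes.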
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initialize the next solution.
Optimal implicit 2-D finite differences to model wave propagation in poroelastic media
NASA Astrophysics Data System (ADS)
Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2016-08-01
Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media at a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical transient solution, obtaining clear seismic wavefields including fast P and slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD so the computational cost will not be as expensive while maintaining the accuracy. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10⁻⁸. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
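Thomas' algorithm for the tridiagonal systems mentioned above is standard; a compact sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.
    a: sub-diagonal (len n-1), b: diagonal (len n), c: super-diagonal (len n-1),
    d: right-hand side (len n). Assumes the system is diagonally dominant,
    as is typical for implicit FD operators."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```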
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
Chhaya, Urvish; Gupte, Akshaya
2010-02-01
Laccase production by solid state fermentation (SSF) using an indigenously isolated litter-dwelling fungus, Fusarium incarnatum LD-3, was optimized. Fourteen medium components were screened by the initial screening method of Plackett-Burman. Each of the components was screened on the basis of its 'p' (probability) value, which was above the 95% confidence level. Ortho-dianisidine, thiamine HCl and CuSO4·5H2O were identified as significant components for laccase production. Central Composite Design response surface methodology was then applied to further optimize laccase production. The optimal concentrations of these medium components for higher laccase production were (g/l): CuSO4·5H2O, 0.01 and thiamine HCl, 0.0136, with ortho-dianisidine (0.388 mM) serving as an inducer. Wheat straw (5.0 g) was used as the solid substrate. Using this statistical optimization method, laccase production was found to increase from 40 U/g to 650 U/g of wheat straw, sixteen times higher than in the non-optimized medium. This is the first report on statistical optimization of laccase production from Fusarium incarnatum LD-3.
Entropy Generation/Availability Energy Loss Analysis Inside MIT Gas Spring and "Two Space" Test Rigs
NASA Technical Reports Server (NTRS)
Ebiana, Asuquo B.; Savadekar, Rupesh T.; Patel, Kaushal V.
2006-01-01
The results of the entropy generation and availability energy loss analysis under conditions of oscillating pressure and oscillating helium gas flow in two Massachusetts Institute of Technology (MIT) test rigs, a piston-cylinder rig and a piston-cylinder-heat exchanger rig, are presented. Two solution domains are of interest: the gas spring (single-space) in the piston-cylinder test rig and the gas spring + heat exchanger (two-space) in the piston-cylinder-heat exchanger test rig. The Sage and CFD-ACE+ commercial numerical codes are used to obtain 1-D and 2-D computer models, respectively, of each of the two solution domains and to simulate the oscillating gas flow and heat transfer effects in these domains. Second law analysis is used to characterize the entropy generation and availability energy losses inside the two solution domains. Internal and external entropy generation and availability energy loss results predicted by Sage and CFD-ACE+ are compared. Thermodynamic loss analysis of simple systems such as the MIT test rigs is often useful for understanding some important features of complex pattern-forming processes in more complex systems like the Stirling engine. This study is aimed at improving numerical codes for the prediction of thermodynamic losses via the development of a loss post-processor. The incorporation of loss post-processors in Stirling engine numerical codes will facilitate Stirling engine performance optimization. Loss analysis using entropy-generation rates due to heat and fluid flow is a relatively new technique for assessing component performance. It offers a deep insight into the flow phenomena, allows a more exact calculation of losses than is possible with traditional means involving the application of loss correlations, and provides an effective tool for improving component and overall system performance.
Simšíková, Michaela; Antalík, Marián; Kaňuchová, Mária; Skvarla, Jiří
2013-08-01
Nanoparticle-protein conjugates have potential for numerous applications due to the combination of the properties of both components. In this paper we studied the conjugation of horse heart cytochrome c with ZnO nanoparticles modified by mercaptoacetic acid (MAA), which may be a material with great potential in anticancer therapy as a consequence of the synergic effect of both components. Cyt c adsorption onto the ZnO-MAA NP surface was studied by UV-vis spectroscopy and by dynamic light scattering at various pH values. The results indicate that the optimal pH for the association of the protein with the modified nanoparticles is in the range 5.8-8.5, where 90-96% of the cytochrome c was assembled on the ZnO-MAA nanoparticles. The interaction of proteins with nanoparticles often results in denaturation or loss of protein function. Our observations from UV-vis spectroscopy and circular dichroism confirmed that the protein structure was preserved after the interaction with the modified nanoparticles.
Shuttle cryogenic supply system optimization study. Volume 5A-1: Users manual for math models
NASA Technical Reports Server (NTRS)
1973-01-01
The Integrated Math Model for Cryogenic Systems is a flexible, broadly applicable systems parametric analysis tool. The program will effectively accommodate systems of considerable complexity involving large numbers of performance dependent variables such as are found in the individual and integrated cryogen systems. Basically, the program logic structure pursues an orderly progression path through any given system in much the same fashion as is employed for manual systems analysis. The system configuration schematic is converted to an alpha-numeric formatted configuration data table input starting with the cryogen consumer and identifying all components, such as lines, fittings, and valves, each in its proper order and ending with the cryogen supply source assembly. Then, for each of the constituent component assemblies, such as gas generators, turbo machinery, heat exchangers, and accumulators, the performance requirements are assembled in input data tabulations. Systems operating constraints and duty cycle definitions are further added as input data coded to the configuration operating sequence.
Immunofluorescence Analysis of Endogenous and Exogenous Centromere-kinetochore Proteins
Niikura, Yohei; Kitagawa, Katsumi
2016-01-01
"Centromeres" and "kinetochores" refer to the site where chromosomes associate with the spindle during cell division. Direct visualization of centromere-kinetochore proteins during the cell cycle remains a fundamental tool in investigating the mechanism(s) of these proteins. Advanced imaging methods in fluorescence microscopy provide remarkable resolution of centromere-kinetochore components and allow direct observation of specific molecular components of the centromeres and kinetochores. In addition, methods of indirect immunofluorescent (IIF) staining using specific antibodies are crucial to these observations. However, despite numerous reports about IIF protocols, few discussed in detail problems of specific centromere-kinetochore proteins.1-4 Here we report optimized protocols to stain endogenous centromere-kinetochore proteins in human cells by using paraformaldehyde fixation and IIF staining. Furthermore, we report protocols to detect Flag-tagged exogenous CENP-A proteins in human cells subjected to acetone or methanol fixation. These methods are useful in detecting and quantifying endogenous centromere-kinetochore proteins and Flag-tagged CENP-A proteins, including those in human cells. PMID:26967065
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Selective laser melting of Inconel super alloy-a review
NASA Astrophysics Data System (ADS)
Karia, M. C.; Popat, M. A.; Sangani, K. B.
2017-07-01
Additive manufacturing is a relatively young technology that uses the principle of layer-by-layer addition of material in solid, liquid or powder form to develop a component or product. The quality of additively manufactured parts is one of the challenges to be addressed. Researchers are continuously working at various levels of additive manufacturing technologies. One of the significant powder bed processes for metals is Selective Laser Melting (SLM). Laser-based processes are attracting increasing attention from researchers and the industrial world, and the potential of this technique is yet to be fully explored. Due to its very high strength and creep resistance, Inconel is an extensively used nickel-based superalloy for manufacturing components for the aerospace, automobile and nuclear industries. Due to its low content of aluminum and titanium, it also exhibits good fabricability. Therefore the alloy is ideally suited to selective laser melting for manufacturing intricate components with high strength requirements. The selection of a suitable manufacturing process for a specific component depends on geometrical complexity, production quantity, cost and required strength. Numerous researchers are working on various aspects such as metallurgical and microstructural investigations, mechanical properties, geometrical accuracy, the effects of process parameters and their optimization, and mathematical modeling. The present paper provides a comprehensive overview of the selective laser melting process for the Inconel group of alloys.
NASA Astrophysics Data System (ADS)
Shin, Hosop; Park, Jonghyun; Han, Sangwoo; Sastry, Ann Marie; Lu, Wei
2015-03-01
The mechanical instability of the Solid Electrolyte Interphase (SEI) layer in lithium ion (Li-ion) batteries causes significant side reactions resulting in Li-ion consumption and cell impedance rise by forming further SEI layers, which eventually leads to battery capacity fade and power fade. In this paper, the composition-/structure-dependent elasticity of the SEI layer is investigated via Atomic Force Microscopy (AFM) measurements coupled with X-ray Photoelectron Spectroscopy (XPS) analysis, and atomistic calculations. It is observed that the inner layer is stiffer than the outer layer. The measured Young's moduli are mostly in the range of 0.2-4.5 GPa, while some values above 80 GPa are also observed. This wide variation of the observed elastic modulus is elucidated by atomistic calculations with a focus on chemical and structural analysis. The numerical analysis shows that the Young's moduli range from 2.4 GPa to 58.1 GPa in the order of the polymeric, organic, and amorphous inorganic components. The crystalline inorganic component (LiF) shows the highest value (135.3 GPa) among the SEI species. This quantitative observation on the elasticity of individual components of the SEI layer is essential for analyzing the mechanical behavior of the SEI layer and for optimizing and controlling it.
Optimizing Utilization of Detectors
2016-03-01
provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset. This optimized expected time ... allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by evaluating an integral, and ... determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables.
Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis
NASA Technical Reports Server (NTRS)
Comiskey, J. G.
1979-01-01
This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.
NASA Astrophysics Data System (ADS)
Peña, M.; Saha, S.; Wu, X.; Wang, J.; Tripp, P.; Moorthi, S.; Bhattacharjee, P.
2016-12-01
The next version of the operational Climate Forecast System (version 3, CFSv3) will be a fully coupled six-component system with diverse applications to earth system modeling, including weather and climate predictions. This system will couple the earth's atmosphere, land, ocean, sea-ice, waves and aerosols for both data assimilation and modeling. It will also use the NOAA Environmental Modeling System (NEMS) software superstructure to couple these components. The CFSv3 is part of the next Unified Global Coupled System (UGCS), which will unify the global prediction systems that are now operational at NCEP. The UGCS is being developed through the efforts of dedicated research and engineering teams and through coordination across many CPO/MAPP and NGGPS groups. During this development phase, the UGCS is being tested for seasonal purposes and undergoes frequent revisions. Each new revision is evaluated to quickly discover, isolate and solve problems that negatively impact its performance. In the UGCS-seasonal model, components (e.g., ocean, sea-ice, atmosphere) are coupled through a NEMS-based "mediator". In this numerical infrastructure, model diagnostics and forecast validation are carried out, both component by component and as a whole. The next stage, model optimization, will require enhanced performance diagnostics tools to help prioritize areas of numerical improvement. After the technical development of the UGCS-seasonal is completed, it will become the first realization of the CFSv3. All future development of this system will be carried out by the climate team at NCEP, in scientific collaboration with the groups that developed the individual components, as well as the climate community. A unique challenge in evaluating this unified weather-climate system is the large number of variables, which evolve over a wide range of temporal and spatial scales. A small set of performance measures and scorecard displays have been created, and collaboration and software contributions from research and operational centers are being incorporated. A status of the CFSv3/UGCS-seasonal development and examples of its performance and measuring tools will be presented.
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate both for variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
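The NDLI formulas themselves are not given in the abstract, but the underlying load-interference idea is simply the probability that thermal capability exceeds the applied load; a Monte Carlo sketch with hypothetical distributions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical distributions (not from the lecture): non-dimensional
# aerothermal load and TPS thermal capability on the same scale.
load = rng.lognormal(mean=0.0, sigma=0.15, size=n)     # normalized heat load
capability = rng.normal(loc=1.3, scale=0.12, size=n)   # normalized capability

reliability = np.mean(capability > load)   # load-interference reliability
print(f"P(capability > load) = {reliability:.5f}")
```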
NASA Astrophysics Data System (ADS)
Alegria Mira, Lara; Thrall, Ashley P.; De Temmerman, Niels
2016-02-01
Deployable scissor structures are well equipped for temporary and mobile applications since they are able to change their form and functionality. They are structural mechanisms that transform from a compact state to an expanded, fully deployed configuration. A barrier to the current design and reuse of scissor structures, however, is that they are traditionally designed for a single purpose. Alternatively, a universal scissor component (USC)-a generalized element which can achieve all traditional scissor types-introduces an opportunity for reuse in which the same component can be utilized for different configurations and spans. In this article, the USC is optimized for structural performance. First, an optimized length for the USC is determined based on a trade-off between component weight and structural performance (measured by deflections). Then, topology optimization, using the simulated annealing algorithm, is implemented to determine a minimum weight layout of beams within a single USC component.
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular its Nash Equilibrium (NE), has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, to illustrate the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving a complex aerodynamic optimization problem.
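The paper's four algorithms are not detailed in the abstract; one simple way to approximate a NE of a two-criterion problem, shown here only as an illustration, is an alternating best-response iteration in which each player re-optimizes its own variables with the other's held fixed. The quadratic objectives below are hypothetical stand-ins for the competing criteria.

```python
import numpy as np
from scipy.optimize import minimize

# Two hypothetical criteria: player 1 controls x, player 2 controls y.
f1 = lambda x, y: (x - 1.0) ** 2 + 0.5 * (x - y) ** 2     # e.g. "drag"
f2 = lambda x, y: (y + 1.0) ** 2 + 0.5 * (x - y) ** 2     # e.g. "weight"

x, y = 0.0, 0.0
for _ in range(50):                       # alternating best responses
    x = minimize(lambda v: f1(v[0], y), [x]).x[0]
    y = minimize(lambda v: f2(x, v[0]), [y]).x[0]
print(f"Nash equilibrium approx: x = {x:.4f}, y = {y:.4f}")
```

For this pair of objectives the iteration contracts to the equilibrium (x, y) = (0.5, -0.5), which is the fixed point of the two best-response maps.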
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is applied directly to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three test cases are simulated: an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell. The numerical results obtained are compared with those of other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress method) for removing the tensile instability is also performed. All numerical results agree well with the available data.
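The paper's optimized shifting formula is not reproduced in the abstract; the sketch below shows only a generic Fickian-type particle-shifting step (each particle nudged away from clustered neighbours), with a simplified kernel weight, to illustrate the idea.

```python
import numpy as np

def shift_particles(pos, h, dt, vmax, coeff=0.5):
    """Generic particle-shifting step (not the paper's exact variant):
    each particle is nudged down the local particle-concentration gradient
    to keep the distribution uniform and suppress tensile instability.
    pos: (n, 2) particle positions; h: smoothing length. O(n^2) for brevity."""
    n = len(pos)
    shift = np.zeros_like(pos)
    for i in range(n):
        r = pos[i] - pos                       # vectors from neighbours to i
        d = np.linalg.norm(r, axis=1)
        mask = (d > 1e-12) & (d < 2.0 * h)     # kernel support neighbours
        # Simple monotone weight standing in for a kernel gradient magnitude.
        w = (1.0 - d[mask] / (2.0 * h)) ** 2
        shift[i] = coeff * h * vmax * dt * np.sum(
            (r[mask] / d[mask, None]) * w[:, None], axis=0)
    return pos + shift
```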
Sizing a rainwater harvesting cistern by minimizing costs
NASA Astrophysics Data System (ADS)
Pelak, Norman; Porporato, Amilcare
2016-10-01
Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
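The paper's closed-form optimum is not reproduced in the abstract, but the trade-off it captures can be illustrated numerically: simulate rainfall as a (daily-resolution) marked Poisson process, run a simple cistern water balance, and compare total water-related cost across candidate volumes. All parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def annual_cost(volume, years=200, lam=0.3, alpha=10.0, roof=100.0,
                demand=0.2, c_storage=50.0, c_water=2.0):
    """Hypothetical cost model: rainfall events arrive with daily probability
    lam and exponentially distributed depths (mean alpha [mm]); the cistern
    (volume [m^3]) supplies a constant demand [m^3/day], and any shortfall is
    bought externally at c_water [$/m^3]. Storage is amortized at
    c_storage [$/m^3/yr]."""
    days = years * 365
    rain = rng.exponential(alpha, days) * (rng.random(days) < lam)   # mm/day
    inflow = rain / 1000.0 * roof                                    # m^3/day
    store, bought = 0.0, 0.0
    for q in inflow:
        store = min(store + q, volume)          # fill, spill the excess
        supplied = min(store, demand)
        bought += demand - supplied             # external make-up water
        store -= supplied
    return c_storage * volume + c_water * bought / years   # $/yr

volumes = np.arange(0.0, 10.5, 0.5)
costs = [annual_cost(v) for v in volumes]
print("optimal volume ≈", volumes[int(np.argmin(costs))], "m^3")
```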
VISIR-I: small vessels, least-time nautical routes using wave forecasts
NASA Astrophysics Data System (ADS)
Mannarini, G.; Pinardi, N.; Coppini, G.; Oddo, P.; Iafrati, A.
2015-09-01
A new numerical model for the on-demand computation of optimal ship routes based on sea-state forecasts has been developed. The model, named VISIR (discoVerIng Safe and effIcient Routes) is designed to support decision-makers when planning a marine voyage. The first version of the system, VISIR-I, considers medium and small motor vessels with lengths of up to a few tens of meters and a displacement hull. The model is made up of three components: the route optimization algorithm, the mechanical model of the ship, and the environmental fields. The optimization algorithm is based on a graph-search method with time-dependent edge weights. The algorithm is also able to compute a voluntary ship speed reduction. The ship model accounts for calm water and added wave resistance by making use of just the principal particulars of the vessel as input parameters. The system also checks the optimal route for parametric roll, pure loss of stability, and surfriding/broaching-to hazard conditions. Significant wave height, wave spectrum peak period, and wave direction forecast fields are employed as an input. Examples of VISIR-I routes in the Mediterranean Sea are provided. The optimal route may be longer in terms of miles sailed and yet it is faster and safer than the geodetic route between the same departure and arrival locations. Route diversions result from the safety constraints and the fact that the algorithm takes into account the full temporal evolution and spatial variability of the environmental fields.
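The graph-search step with time-dependent edge weights can be pictured as a Dijkstra-style label-setting search in which each edge's crossing time depends on the departure time (e.g., via the wave forecast); the structure below is illustrative, assumes the usual FIFO property (leaving later never means arriving earlier), and is not VISIR's exact implementation.

```python
import heapq

def least_time_route(graph, source, target, t0):
    """Label-setting search with time-dependent edge weights.
    graph[u] is a list of (v, crossing_time_fn) pairs, where
    crossing_time_fn(t) returns the edge traversal time when leaving u
    at time t. Assumes the target is reachable from the source."""
    arrival = {source: t0}
    prev = {}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            break
        if t > arrival.get(u, float("inf")):
            continue                            # stale heap entry
        for v, crossing_time in graph[u]:
            t_v = t + crossing_time(t)
            if t_v < arrival.get(v, float("inf")):
                arrival[v] = t_v
                prev[v] = u
                heapq.heappush(heap, (t_v, v))
    path, node = [target], target
    while node != source:                       # reconstruct the route
        node = prev[node]
        path.append(node)
    return path[::-1], arrival[target]
```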
Optimal placement of excitations and sensors for verification of large dynamical systems
NASA Technical Reports Server (NTRS)
Salama, M.; Rose, T.; Garba, J.
1987-01-01
The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments including a square plate and a 960 degrees-of-freedom Control of Flexible Structure (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
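As an illustration of the combinatorial search (not the authors' specific annealing variation), a generic simulated-annealing loop over sensor subsets might look like the sketch below, with score() standing in for whatever measure of observed response is being maximized.

```python
import math
import random

def anneal_placement(n_candidates, n_sensors, score, iters=20000,
                     t0=1.0, cooling=0.9995, seed=0):
    """Generic simulated annealing over sensor subsets. score(subset)
    returns the quantity to maximize, e.g. a measure of how well the
    selected degrees of freedom observe the target modes."""
    rng = random.Random(seed)
    current = set(rng.sample(range(n_candidates), n_sensors))
    best, best_val = set(current), score(current)
    cur_val, temp = best_val, t0
    for _ in range(iters):
        # Propose a swap: drop one selected DOF, add one unselected DOF.
        out = rng.choice(sorted(current))
        cand = rng.choice([i for i in range(n_candidates) if i not in current])
        trial = (current - {out}) | {cand}
        val = score(trial)
        # Accept improvements always, worse moves with Boltzmann probability.
        if val > cur_val or rng.random() < math.exp((val - cur_val) / temp):
            current, cur_val = trial, val
            if val > best_val:
                best, best_val = set(trial), val
        temp *= cooling
    return sorted(best), best_val
```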
NASA Astrophysics Data System (ADS)
Oh, Sehyeong; Lee, Boogeon; Park, Hyungmin; Choi, Haecheon
2017-11-01
We investigate a hovering rhinoceros beetle using numerical simulation and blade element theory. Numerical simulations are performed using an immersed boundary method. In the simulation, the hindwings are modeled as a rigid flat plate, and three-dimensionally scanned elytra and body are used. The results of simulation indicate that the lift force generated by the hindwings alone is sufficient to support the weight, and the elytra generate negligible lift force. Considering the hindwings only, we present a blade element model based on quasi-steady assumptions to identify the mechanisms of aerodynamic force generation and power expenditure in the hovering flight of a rhinoceros beetle. We show that the results from the present blade element model are in excellent agreement with numerical ones. Based on the current blade element model, we find the optimal wing kinematics minimizing the aerodynamic power requirement using a hybrid optimization algorithm combining a clustering genetic algorithm with a gradient-based optimizer. We show that the optimal wing kinematics reduce the aerodynamic power consumption, generating enough lift force to support the weight. This research was supported by a Grant to Bio-Mimetic Robot Research Center Funded by Defense Acquisition Program Administration, and by Agency for Defense Development (UD130070ID) and NRF-2016R1E1A1A02921549 of the MSIP of Korea.
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.
2018-06-01
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm allowing one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid ill-conditioning produced by scaling. Numerical experiments illustrating theoretical results are described.
Flow optimization study of a batch microfluidics PET tracer synthesizing device
Elizarov, Arkadij M.; Meinhart, Carl; van Dam, R. Michael; Huang, Jiang; Daridon, Antoine; Heath, James R.; Kolb, Hartmuth C.
2010-01-01
We present numerical modeling and experimental studies of flow optimization inside a batch microfluidic micro-reactor used for synthesis of human-scale doses of Positron Emission Tomography (PET) tracers. Novel techniques are used for mixing within, and eluting liquid out of, the coin-shaped reaction chamber. Numerical solutions of the general incompressible Navier Stokes equations along with time-dependent elution scalar field equation for the three dimensional coin-shaped geometry were obtained and validated using fluorescence imaging analysis techniques. Utilizing the approach presented in this work, we were able to identify optimized geometrical and operational conditions for the micro-reactor in the absence of radioactive material commonly used in PET related tracer production platforms as well as evaluate the designed and fabricated micro-reactor using numerical and experimental validations. PMID:21072595
Optimization of Thermoelectric Components for Automobile Waste Heat Recovery Systems
NASA Astrophysics Data System (ADS)
Kumar, Sumeet; Heister, Stephen D.; Xu, Xianfan; Salvador, James R.
2015-10-01
For a typical spark ignition engine approximately 40% of available thermal energy is lost as hot exhaust gas. To improve fuel economy, researchers are currently evaluating technology which exploits exhaust stream thermal power by use of thermoelectric generators (TEGs) that operate on the basis of the Seebeck effect. A 5% improvement in fuel economy, achieved by use of TEG output power, is a stated objective for light-duty trucks and personal automobiles. System modeling of thermoelectric (TE) components requires solution of coupled thermal and electric fluxes through the n and p-type semiconductor legs, given appropriate thermal boundary conditions at the junctions. Such applications have large thermal gradients along the semiconductor legs, and material properties are highly dependent on spatially varying temperature profiles. In this work, one-dimensional heat flux and temperature variations across thermoelectric legs were solved by using an iterative numerical approach to optimize both TE module and TEG designs. Design traits were investigated by assuming use of skutterudite as a thermoelectric material with potential for automotive applications in which exhaust gas and heat exchanger temperatures typically vary from 100°C to over 600°C. Dependence of leg efficiency, thermal fluxes and electric power generation on leg geometry, fill fractions, electric current, thermal boundary conditions, etc., were studied in detail. Optimum leg geometries were computed for a variety of automotive exhaust conditions.
NASA Astrophysics Data System (ADS)
Xu, Jun
Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of the rising energy costs, the government mandate on the reduction of energy consumption, and the human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain; and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance, but also difficult, for an LSE seeking to serve the load, maximize its profit, and manage risks. In this topic, a mid-term power portfolio optimization problem with risk management is presented. Key instruments are considered, risk terms based on semi-variances of spot market transactions are introduced, and penalties on load obligation violations are added to the objective function to improve algorithm convergence and constraint satisfaction. To overcome the inseparability of the resulting problem, a surrogate optimization framework is developed enabling a decomposition and coordination approach. Numerical testing results show that our method effectively provides decisions for various instruments to maximize profit and manage risks, and is computationally efficient.
Design of an Experimental Facility for Passive Heat Removal in Advanced Nuclear Reactors
NASA Astrophysics Data System (ADS)
Bersano, Andrea
With reference to innovative heat exchangers to be used in the passive safety systems of Generation IV nuclear reactors and Small Modular Reactors, it is necessary to study natural circulation and the efficiency of heat removal systems. Especially in safety systems, such as the decay heat removal systems of many reactors, the use of passive components is increasing in order to improve availability and reliability during possible accident scenarios, reducing the need for human intervention. Many of these systems are based on natural circulation, so they require intensive analysis due to the possible instability of the related phenomena. The aim of this thesis work is to build a scaled facility which can reproduce, in a simplified way, the decay heat removal system (DHR2) of the lead-cooled fast reactor ALFRED and, in particular, the bayonet heat exchanger, which transfers heat from lead to water. Given the thermal power to be removed, the natural circulation flow rate and the pressure drops will be studied both experimentally and numerically using the code RELAP5-3D. The first phase of preliminary analysis and design includes the calculations to design the heat source and heat sink, the choice of materials and components, and CAD drawings of the facility. After that, the numerical study is performed using the thermal-hydraulic code RELAP5-3D in order to simulate the behavior of the system. The purpose is to run pretest simulations of the facility to optimize the dimensioning by setting the operating parameters (temperature, pressure, etc.) and to choose the most suitable measurement devices. The model of the system is continually refined to better simulate the system studied. Particular attention is dedicated to the control logic of the system in order to obtain acceptable results. The initial experimental test phase consists of cold zero-power tests of the facility in order to characterize and calibrate the pressure drops. In future work the experimental results will be compared to the values predicted by the system code, and differences will be discussed with the ultimate goal of qualifying RELAP5-3D for the analysis of decay heat removal systems in natural circulation. The numerical data will also be used to understand the key parameters related to heat transfer in natural circulation and to optimize the operation of the system.
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
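The kind of artifact discussed here can be reproduced with a toy one-bucket model. The sketch below is an illustration of the general phenomenon, not one of the six models or 13 basins studied: it compares the least-squares objective obtained with a coarse fixed-step explicit Euler scheme against a fine-step reference, and the coarse scheme visibly deforms the objective across the parameter range.

```python
import numpy as np

# Toy illustration (not from the paper): a one-bucket model dS/dt = P - k*S**1.5
# integrated with a coarse fixed-step explicit Euler scheme versus a fine
# reference step, and the resulting least-squares objective over the parameter k.
rng = np.random.default_rng(0)
P = rng.gamma(shape=0.4, scale=10.0, size=365)      # synthetic daily "rainfall"

def simulate(k, dt):
    nsub = int(round(1.0 / dt))
    S, q = 1.0, np.empty(P.size)
    for day, p in enumerate(P):
        for _ in range(nsub):                       # explicit Euler sub-steps
            S = max(S + dt * (p - k * S**1.5), 0.0)
        q[day] = k * S**1.5
    return q

k_true = 0.35
q_obs = simulate(k_true, dt=1.0 / 64)               # fine-step "observations"

for k in np.linspace(0.2, 0.5, 7):
    sse_coarse = np.sum((simulate(k, 1.0)      - q_obs) ** 2)   # daily Euler step
    sse_fine   = np.sum((simulate(k, 1.0 / 64) - q_obs) ** 2)   # near-exact
    print(f"k={k:.2f}  SSE(coarse)={sse_coarse:10.3f}  SSE(fine)={sse_fine:10.3f}")
```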
NASA Astrophysics Data System (ADS)
Sundara Rajan, R.; Uthayakumar, R.
2017-12-01
In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, optimal order quantity and optimal replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples are presented to illustrate the algorithm and to obtain the optimal profit. We also derive managerial implications from the numerical examples to substantiate our model. The results show an improvement in total profit when shortages are completely backlogged rather than partially backlogged.
Optimization of the Hartmann-Shack microlens array
NASA Astrophysics Data System (ADS)
de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William
2011-04-01
In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays with a larger number of microlenses by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model
NASA Astrophysics Data System (ADS)
Wiezel, Oren; Or, Yizhar
2016-11-01
Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of the joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both the net displacement and the energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in the direction of motion and to obtain an estimate of the optimal stroke amplitude. We also find the optimal swimmer geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". A numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.
Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.
Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G
2016-05-01
In vehicle crashworthiness design optimization, detailed system evaluation capable of producing reliable results is typically achieved through high-order numerical computational (HNC) models such as the dynamic finite element model, the mesh-free model, etc. However, the application of these models, especially during optimization studies, is challenged by their inherent high demand on computational resources, the conditional stability of the solution process, and the lack of knowledge of a viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in the evaluation of vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to a full-scale design-optimization problem involving a full vehicle model with greater structural detail requires further theoretical development.
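A 2DOF lumped-mass-spring impact model of the kind referred to above can be integrated in a few lines. The sketch below is a generic textbook idealization with assumed masses, stiffnesses and impact speed, not the AEMS implementation or its critical-load matching criterion; it simply shows how the absorbable-energy split between two springs can be monitored.

```python
import numpy as np

# Illustrative 2-DOF lumped-mass-spring frontal-impact model: mass m1 is the front
# structure coupled to the barrier through spring k1, mass m2 is the passenger
# compartment coupled to m1 through spring k2. All parameter values are assumptions.
m1, m2 = 300.0, 1200.0          # kg
k1, k2 = 6.0e5, 1.8e6           # N/m  (sets the absorbable-energy split)
v0 = 15.6                       # m/s  (~56 km/h impact speed)
dt, t_end = 1e-5, 0.15

x1 = x2 = 0.0                   # crush displacements toward the barrier
v1 = v2 = v0
peak_a2, E1, E2 = 0.0, 0.0, 0.0
for _ in range(int(t_end / dt)):
    f1 = k1 * max(x1, 0.0)              # barrier spring acts only in compression
    f2 = k2 * max(x2 - x1, 0.0)         # coupling spring between the two masses
    a1 = (f2 - f1) / m1
    a2 = -f2 / m2
    v1 += a1 * dt; v2 += a2 * dt        # semi-implicit Euler integration
    x1 += v1 * dt; x2 += v2 * dt
    peak_a2 = max(peak_a2, abs(a2))
    E1 = max(E1, 0.5 * k1 * max(x1, 0.0) ** 2)   # peak energy stored in each spring
    E2 = max(E2, 0.5 * k2 * max(x2 - x1, 0.0) ** 2)

print(f"peak compartment deceleration: {peak_a2 / 9.81:.1f} g")
print(f"absorbable-energy ratio E1/(E1+E2): {E1 / (E1 + E2):.2f}")
```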
Towards the optimal design of an uncemented acetabular component using genetic algorithms
NASA Astrophysics Data System (ADS)
Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay
2015-12-01
Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
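The response-surface workflow reviewed here is easy to sketch for a two-variable toy problem: evaluate an "expensive" objective at a small design of experiments, fit a quadratic polynomial by least squares, and optimize the cheap surrogate. The objective function, sample count and bounds below are invented stand-ins, not data from the wing, diffuser, injector or turbine examples.

```python
import numpy as np
from scipy.optimize import minimize

# Polynomial response-surface sketch: DOE sampling, quadratic least-squares fit,
# then optimization of the surrogate. The "expensive" objective is a made-up
# stand-in for a CFD or experimental result.
def expensive_objective(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.2) ** 2 + 0.3 * x[0] * x[1]

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(15, 2))            # DOE: 15 random design points
y = np.array([expensive_objective(x) for x in X])
coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

surrogate = lambda x: float(quad_features(np.atleast_2d(x)) @ coef)
res = minimize(surrogate, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum:", res.x, "true value there:", expensive_objective(res.x))
```

A neural-network regressor could replace the quadratic fit in the same loop, which is essentially the comparison the article reviews.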
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Christiansen, Ove
2018-06-01
We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian process regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated coupled cluster singles and doubles with perturbative triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single-point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
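The low-level-plus-learned-difference idea can be illustrated on a one-dimensional toy potential. In the sketch below the two analytic functions stand in for the cheap and the target electronic-structure methods (they are assumptions, not quantum-chemistry calls), and scikit-learn's GaussianProcessRegressor interpolates their difference from a handful of single points.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1D sketch of the "low level + GPR-corrected difference" idea (illustrative
# stand-ins, not the CCSD(F12*)(T) workflow).
f_low  = lambda r: (r - 1.10) ** 2                            # assumed cheap potential
f_high = lambda r: (r - 1.15) ** 2 + 0.05 * np.sin(3.0 * r)   # assumed target potential

R = np.linspace(0.9, 1.5, 6).reshape(-1, 1)          # few "expensive" single points
delta = np.array([f_high(r[0]) - f_low(r[0]) for r in R])
gpr = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.2),
                               normalize_y=True).fit(R, delta)

corrected = lambda r: f_low(r) + gpr.predict(np.array([[r]]))[0]
r_opt = minimize_scalar(corrected, bounds=(0.9, 1.5), method="bounded").x
r_ref = minimize_scalar(f_high, bounds=(0.9, 1.5), method="bounded").x
print(f"corrected minimum at r = {r_opt:.4f}, target minimum at r = {r_ref:.4f}")
```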
Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization
NASA Astrophysics Data System (ADS)
Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa
2018-05-01
In this paper, parametric studies of the virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the design cycle, the split of design variables and the role assignment. Typical numerical cases, including the inverse design and drag reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; e.g., increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of the effort in aerodynamic optimization for more complicated engineering problems, such as multi-element airfoils and wing-body configurations.
Optimal control applied to a model for species augmentation.
Bodine, Erin N; Gross, Louis J; Lenhart, Suzanne
2008-10-01
Species augmentation is a method of reducing species loss via augmenting declining or threatened populations with individuals from captive-bred or stable, wild populations. In this paper, we develop a differential equations model and optimal control formulation for a continuous time augmentation of a general declining population. We find a characterization for the optimal control and show numerical results for scenarios of different illustrative parameter sets. The numerical results provide considerably more detail about the exact dynamics of optimal augmentation than can be readily intuited. The work and results presented in this paper are a first step toward building a general theory of population augmentation, which accounts for the complexities inherent in many conservation biology applications.
Kang, Seok-Won; Fragala, Joe; Kim, Su-Ho; Banerjee, Debjyoti
2017-11-01
This paper presents a design optimization method based on theoretical analysis and numerical calculations, using a commercial multi-physics solver (e.g., ANSYS and ESI CFD-ACE+), for a 3D continuous model, to analyze the bending characteristics of an electrically heated bimorph microcantilever. The results from the theoretical calculation and numerical analysis are compared with those measured using a CCD camera and magnification lenses for a chip level microcantilever array fabricated in this study. The bimorph microcantilevers are thermally actuated by joule heating generated by a 0.4 μm thin-film Au heater deposited on 0.6 μm Si₃N₄ microcantilevers. The initial deflections caused by residual stress resulting from the thermal bonding of two metallic layers with different coefficients of thermal expansion (CTEs) are additionally considered, to find the exact deflected position. The numerically calculated total deflections caused by electrical actuation show differences of 10%, on average, with experimental measurements in the operating current region (i.e., ~25 mA) to prevent deterioration by overheating. Bimorph microcantilevers are promising components for use in various MEMS (Micro-Electro-Mechanical System) sensing applications, and their deflection characteristics in static mode sensing are essential for detecting changes in thermal stress on the surface of microcantilevers.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
Optimization of fringe-type laser anemometers for turbine engine component testing
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.; Oberle, L. G.; Weikle, D. H.
1984-01-01
The fringe type laser anemometer is analyzed using the Cramer-Rao bound for the variance of the estimate of the Doppler frequency as a figure of merit. Mie scattering theory is used to calculate the Doppler signal wherein both the amplitude and phase of the scattered light are taken into account. The noise from wall scatter is calculated using the wall bidirectional reflectivity and the irradiance of the incident beams. A procedure is described to determine the optimum aperture mask for the probe volume located a given distance from a wall. The expected performance of counter type processors is also discussed in relation to the Cramer-Rao bound. Numerical examples are presented for a coaxial backscatter anemometer.
Inflationary preheating dynamics with two-species condensates
NASA Astrophysics Data System (ADS)
Zache, T. V.; Kasper, V.; Berges, J.
2017-06-01
We investigate both analytically and numerically a two-component ultracold atom system in one spatial dimension. The model features a tachyonic instability, which incorporates characteristic aspects of the mechanisms for particle production in early universe inflaton models. We establish a direct correspondence between measurable macroscopic growth rates for occupation numbers of the ultracold Bose gas and the underlying microscopic processes in terms of Feynman loop diagrams. We analyze several existing ultracold atom setups featuring dynamical instabilities and propose optimized protocols for their experimental realization. We demonstrate that relevant dynamical processes can be enhanced using a seeding procedure for unstable modes and clarify the role of initial quantum fluctuations and the generation of a nonlinear secondary stage for the amplification of modes.
Light-modulating pressure sensor with integrated flexible organic light-emitting diode.
Cheneler, D; Vervaeke, M; Thienpont, H
2014-05-01
Organic light-emitting diodes (OLEDs) are used almost exclusively for display purposes. Even when implemented as a sensing component, it is rarely in a manner that exploits the possible compliance of the OLED. Here it is shown that OLEDs can be integrated into compliant mechanical micro-devices making a new range of applications possible. A light-modulating pressure sensor is considered, whereby the OLED is integrated with a silicon membrane. It is shown that such devices have potential and advantages over current measurement techniques. An analytical model has been developed that calculates the response of the device. Ray tracing numerical simulations verify the theory and show that the design can be optimized to maximize the resolution of the sensor.
Determination of optimal tool parameters for hot mandrel bending of pipe elbows
NASA Astrophysics Data System (ADS)
Tabakajew, Dmitri; Homberg, Werner
2018-05-01
Seamless pipe elbows are important components in mechanical, plant and apparatus engineering. Typically, they are produced by the so-called `Hamburg process'. In this hot forming process, the initial pipes are subsequently pushed over an ox-horn-shaped bending mandrel. The geometric shape of the mandrel influences the diameter, bending radius and wall thickness distribution of the pipe elbow. This paper presents the numerical simulation model of the hot mandrel bending process created to ensure that the optimum mandrel geometry can be determined at an early stage. A fundamental analysis was conducted to determine the influence of significant parameters on the pipe elbow quality. The chosen methods and approach as well as the corresponding results are described in this paper.
Optimal trajectories of aircraft and spacecraft
NASA Technical Reports Server (NTRS)
Miele, A.
1990-01-01
Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.
The numerical methods for the development of the mixture region in the vapor explosion simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y.; Ohashi, H.; Akiyama, M.
An attempt is being made to numerically simulate the process of the vapor explosion with a general multi-component, multi-dimensional code. Because of the rapid change of the flow field and the extremely nonuniform distribution of the components in the system during a vapor explosion, numerical divergence and diffusion easily occur. A dispersed component model and a multiregion scheme, by which these difficulties can be effectively overcome, were proposed. Simulations have been performed for the processes of premixing and fragmentation propagation in the vapor explosion.
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
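As a concrete illustration of the idea, the following sketch applies a τ-EO style update to balanced graph bipartitioning on a random graph. It is a minimal reading of the method (node fitness as the fraction of same-partition neighbours, power-law rank selection over the worst-ranked nodes), with assumed parameter values, not the authors' implementation.

```python
import numpy as np

# Minimal tau-EO style sketch for balanced graph bipartitioning: the node to
# update is drawn from the fitness ranking with probability ~ rank^(-tau), and
# every move is accepted while the best cut seen so far is recorded.
rng = np.random.default_rng(0)
n, p_edge, tau = 64, 0.08, 1.4                      # assumed problem/parameter values
adj = np.triu(rng.random((n, n)) < p_edge, k=1)
adj = adj | adj.T                                   # random undirected graph

part = np.array([0, 1] * (n // 2)); rng.shuffle(part)   # balanced initial partition
deg = np.maximum(adj.sum(1), 1)
probs = np.arange(1, n + 1, dtype=float) ** (-tau)
probs /= probs.sum()                                # power-law over fitness ranks

def cut_size(p):
    return int(np.sum(adj & (p[:, None] != p[None, :])) // 2)

best_cut = cut_size(part)
for _ in range(20000):
    same = (adj & (part[:, None] == part[None, :])).sum(1)
    fitness = same / deg                            # low fitness = badly placed node
    i = np.argsort(fitness)[rng.choice(n, p=probs)] # rank-selected "worst" node
    j = rng.choice(np.flatnonzero(part != part[i])) # random partner in the other half
    part[i], part[j] = part[j], part[i]             # swap keeps the halves balanced
    best_cut = min(best_cut, cut_size(part))

print("best cut size found:", best_cut)
```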
Assessment of Lithium-based Battery Electrolytes Developed under the NASA PERS Program
NASA Technical Reports Server (NTRS)
Bennett, William R.; Baldwin, Richard S.
2006-01-01
Recently, NASA formally completed the Polymer Energy Rechargeable System (PERS) Program, which was established in 2000 in collaboration with the Air Force Research Laboratory (AFRL) to support the development of polymer-based, lithium-based cell chemistries and battery technologies to address the next generation of aerospace applications and mission needs. The goal of this program was to ultimately develop an advanced, space-qualified battery technology, which embodied a solid polymer electrolyte (SPE) and complementary components, with improved performance characteristics that would address future aerospace battery requirements. Programmatically, the PERS initiative exploited both interagency collaborations to address common technology and engineering issues and the active participation of academia and private industry. The initial program phases focused on R&D activities to address the critical technical issues and challenges at the cell level. A variety of cell and polymeric electrolyte concepts were pursued as part of the development efforts undertaken at numerous governmental, industrial and academic laboratories. Numerous candidate electrolyte materials were developed, synthesized and optimized for evaluation. Utilizing the component screening facility and the "standardized" test procedures developed at the NASA Glenn Research Center, electrochemical screening and performance evaluations of promising candidate materials were completed. This overview summarizes test results for a variety of candidate electrolyte materials that were developed under the PERS Program. Electrolyte properties are contrasted and compared to the original project goals, and the strengths and weaknesses of the electrolyte chemistries are discussed. Limited cycling data for full-cells using lithium metal and vanadium oxide electrodes are also presented. Based on measured electrolyte properties, the projected performance characteristics and temperature limitations of batteries utilizing the advanced electrolytes and components have been estimated. Limitations for the achievement of practical performance levels are also discussed, as well as needs for future research and development.
Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian
2003-01-01
The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation resulting from the use of a finite-word-length (FWL) block-floating-point representation scheme was analyzed. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and the precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced, and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
Springback optimization in automotive Shock Absorber Cup with Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kakandikar, Ganesh; Nandedkar, Vilas
2018-02-01
Drawing or forming is a process normally used to obtain a required component form from a metal blank by applying a punch which radially draws the blank into the die by mechanical or hydraulic action, or a combination of both. When the component is drawn to a depth greater than its diameter, the process is usually referred to as deep drawing, which involves complicated states of material deformation. Due to the radial drawing of the material as it enters the die, radial drawing stress occurs in the flange together with tangential compressive stress. This compression generates wrinkles in the flange. Wrinkling is an unwanted phenomenon and can be controlled by the application of a blank-holding force. Tensile stresses cause thinning in the wall region of the cup. The three main types of defects that occur in such a process are wrinkling, fracturing and springback. This paper reports work focused on springback and its control. Due to the complexity of the process, tool try-outs and experimentation can be costly and time consuming. Numerical simulation proves to be a good option for studying the process and developing a control strategy for reducing the springback. Finite-element based simulations have been used widely for such purposes. In this study, the springback in deep drawing of an automotive Shock Absorber Cup is simulated with the finite element method. Taguchi design of experiments and analysis of variance are used to analyze the influence of the process parameters on the springback. Mathematical relations are developed to relate the process parameters and the resulting springback. The optimization problem is formulated for the springback, referring to the displacement magnitude in the selected sections. A genetic algorithm is then applied for process optimization with the objective of minimizing the springback. The results indicate that a better prediction of the springback and process optimization can be achieved with a combined use of these methods and tools.
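The final optimization step can be mimicked with a small real-coded genetic algorithm acting on a springback surrogate. In the sketch below the quadratic response function, the variable names (blank-holder force, die radius, friction coefficient) and their bounds are invented placeholders for a regression fitted to simulation DOE results, not values from this study.

```python
import numpy as np

# Illustrative real-coded GA minimizing a springback surrogate. The response
# function and parameter bounds are made-up stand-ins for a fitted regression.
def springback(x):
    bhf, r_die, mu = x            # blank-holder force [kN], die radius [mm], friction
    return (2.5 - 0.012 * bhf + 0.00004 * bhf ** 2
            + 0.08 * (r_die - 6.0) ** 2
            + 3.0 * (mu - 0.08) ** 2)

lo = np.array([20.0, 3.0, 0.02])
hi = np.array([150.0, 10.0, 0.15])
rng = np.random.default_rng(2)
pop = rng.uniform(lo, hi, size=(40, 3))

def tournament(fit):
    a, b = rng.integers(len(fit), size=2)
    return a if fit[a] < fit[b] else b

for gen in range(60):
    fit = np.array([springback(ind) for ind in pop])
    new = [pop[np.argmin(fit)].copy()]                 # elitism
    while len(new) < len(pop):
        p1, p2 = pop[tournament(fit)], pop[tournament(fit)]
        alpha = rng.random(3)
        child = alpha * p1 + (1.0 - alpha) * p2        # blend crossover
        child += rng.normal(0.0, 0.02, 3) * (hi - lo)  # Gaussian mutation
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.argmin([springback(ind) for ind in pop])]
print("best parameters (BHF, die radius, friction):", np.round(best, 3))
print("predicted springback:", round(springback(best), 4))
```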
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits (detection, localization, and identification of a target object within a scene) of correlation methods and to exploit the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect degrades the decision performance of the correlator when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
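The core of an all-numerical correlator is an FFT-based correlation against a filter built in the Fourier domain. The sketch below uses a simple phase-only filter on synthetic data; it illustrates the principle only, not the adapted 8-bit composite or segmented filters described above.

```python
import numpy as np

# Bare-bones all-numerical VanderLugt-style correlation with a phase-only filter
# (POF) on synthetic data.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))                          # stand-in reference pattern

scene = 0.1 * rng.random((128, 128))                # noisy scene ...
scene[40:72, 55:87] += ref                          # ... containing the reference

# Zero-pad the reference to the scene size and build the POF in the Fourier domain.
ref_pad = np.zeros_like(scene)
ref_pad[:32, :32] = ref
R = np.fft.fft2(ref_pad)
pof = np.exp(-1j * np.angle(R))                     # phase-only filter H = R*/|R|

corr = np.fft.ifft2(np.fft.fft2(scene) * pof)
peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print("correlation peak at", peak, "(expected near (40, 55))")
```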
Short-term airing by natural ventilation - modeling and control strategies.
Perino, M; Heiselberg, P
2009-10-01
The need to improve the energy efficiency of buildings requires new and more efficient ventilation systems. It has been demonstrated that innovative operating concepts that make use of natural ventilation seem to be more appreciated by occupants. This kind of system frequently integrates traditional mechanical ventilation components with natural ventilation devices, such as motorized windows and louvers. Among the various ventilation strategies that are currently available, buoyancy-driven single-sided natural ventilation has proved to be very effective and can provide high air change rates for temperature and IAQ control. However, in order to promote a wider application of these systems, an improvement in the knowledge of their working principles and the availability of new design and simulation tools are necessary. In this context, the paper analyses and presents the results of a research project that was aimed at developing and validating numerical models for the analysis of buoyancy-driven single-sided natural ventilation systems. Once validated, these models can be used to optimize control strategies in order to achieve satisfactory indoor comfort conditions and IAQ. Practical implications: Numerical and experimental analyses have proved that short-term airing by intermittent ventilation is an effective measure to satisfactorily control IAQ. Different control strategies have been investigated to optimize the capabilities of the systems. The proposed zonal model has provided good performance and could be adopted as a design tool, while CFD simulations can be profitably used for detailed studies of the pollutant concentration distribution in a room and to address local discomfort problems.
Tightening force and torque of nonlocking screws in a reverse shoulder prosthesis.
Terrier, A; Kochbeck, S H; Merlini, F; Gortchacow, M; Pioletti, D P; Farron, A
2010-07-01
Reversed shoulder arthroplasty is an accepted treatment for glenohumeral arthritis associated with rotator cuff deficiency. For most reversed shoulder prostheses, the baseplate of the glenoid component is uncemented and its primary stability is provided by a central peg and peripheral screws. Because of the importance of the primary stability for good osteo-integration of the baseplate, the optimal fixation of the screws is crucial. In particular, the amplitude of the tightening force of the nonlocking screws is clearly associated with this stability. Since this force is unknown, it is currently not accounted for in experimental or numerical analyses. Thus, the primary goal of this work is to measure this tightening force experimentally. In addition, the tightening torque was also measured, to estimate an optimal surgical value. An experimental setup with an instrumented baseplate was developed to measure simultaneously the tightening force, tightening torque and screwing angle of the nonlocking screws of the Aequalis reversed prosthesis. In addition, the amount of bone volume around each screw was measured with a micro-CT. Measurements were performed on 6 human cadaveric scapulae. A statistically significant correlation (p<0.05, R=0.83) was obtained between the maximal tightening force and the bone volume. The relationship between the tightening torque and the bone volume was not statistically significant. The experimental relationship presented in this paper can be used in numerical analyses to improve the baseplate fixation in the glenoid bone.
Multimaterial topology optimization of contact problems using phase field regularization
NASA Astrophysics Data System (ADS)
Myśliński, Andrzej
2018-01-01
A numerical method to solve multimaterial topology optimization problems for elastic bodies in unilateral contact with Tresca friction is developed in this paper. The displacement of the elastic body in contact is governed by an elliptic equation with inequality boundary conditions. The body is assumed to consist of more than two distinct isotropic elastic materials. The material distribution function is chosen as the design variable. Since high contact stress appears during the contact phenomenon, the aim of the structural optimization problem is to find a topology of the domain occupied by the body such that the normal contact stress along the boundary of the body is minimized. The original cost functional is regularized using the multiphase volume-constrained Ginzburg-Landau energy functional rather than the perimeter functional. The first-order necessary optimality condition is recalled and used to formulate the generalized gradient flow equations of Allen-Cahn type. The optimal topology is obtained as the steady state of the phase transition governed by the generalized Allen-Cahn equation. As the interface width parameter tends to zero, the transition from the phase field model to the level set model is studied. The optimization problem is solved numerically using an operator splitting approach combined with the projection gradient method. Numerical examples confirming the applicability of the proposed method are provided and discussed.
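The Allen-Cahn gradient flow that drives the phase field toward its steady state can be illustrated in one dimension. The sketch below evolves a scalar field under a double-well potential only; the elasticity, contact and volume-constraint couplings of the actual problem are omitted, and the parameter values are assumptions.

```python
import numpy as np

# One-dimensional Allen-Cahn gradient flow u_t = eps^2 u_xx - (u^3 - u): the field
# separates into regions near the wells at -1 and +1, the mechanism used above to
# define the material distribution. Toy setting with assumed parameters.
eps, nx, dt, nsteps = 0.05, 201, 1e-4, 20000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = 0.1 * np.random.default_rng(0).standard_normal(nx)   # noisy initial field

for _ in range(nsteps):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    lap[0] = 2.0 * (u[1] - u[0]) / dx ** 2                # homogeneous Neumann
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx ** 2             # boundary conditions
    u = u + dt * (eps ** 2 * lap - (u ** 3 - u))          # explicit Euler step

print("field settles near the wells at -1 and +1:", np.round(u[::50], 2))
```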
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.
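One building block named above, the Legendre-Gauss quadrature rule, is easy to exercise on its own. The short check below is an illustration only (it does not reproduce the Caputo operational-matrix machinery): it integrates a smooth function on [0, 1] with 8 nodes.

```python
import numpy as np

# Legendre-Gauss quadrature on [0, 1]: map the standard nodes/weights from [-1, 1]
# and check the rule against a known integral.
nodes, weights = np.polynomial.legendre.leggauss(8)
t = 0.5 * (nodes + 1.0)                 # map from [-1, 1] to [0, 1]
w = 0.5 * weights
approx = np.sum(w * np.exp(-t) * t ** 2)
exact = 2.0 - 5.0 * np.exp(-1.0)        # integral of t^2 e^(-t) on [0, 1]
print(f"quadrature = {approx:.12f}, exact = {exact:.12f}")
```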
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
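The alternating scheme described here can be mimicked on a tiny linear least-squares problem. In the sketch below the two random least-squares terms are stand-ins for the misfits of two data subsets (e.g., body-wave and surface-wave data); the update formulas follow the generic augmented Lagrangian / consensus pattern, not the authors' specific implementation.

```python
import numpy as np

# Two "component" objectives are minimized separately over copies x1, x2 of the
# model, with a multiplier update steering them toward a common solution of the
# full problem min ||A1 x - b1||^2 + ||A2 x - b2||^2.
rng = np.random.default_rng(0)
A1, b1 = rng.standard_normal((30, 5)), rng.standard_normal(30)
A2, b2 = rng.standard_normal((40, 5)), rng.standard_normal(40)

rho = 1.0
x1 = x2 = np.zeros(5)
lam = np.zeros(5)                                   # Lagrange multipliers
I = np.eye(5)
for _ in range(100):
    # x1 minimizes ||A1 x - b1||^2 + lam.(x - x2) + (rho/2)||x - x2||^2
    x1 = np.linalg.solve(2 * A1.T @ A1 + rho * I, 2 * A1.T @ b1 - lam + rho * x2)
    # x2 minimizes ||A2 x - b2||^2 - lam.(x1 - x) + (rho/2)||x1 - x||^2
    x2 = np.linalg.solve(2 * A2.T @ A2 + rho * I, 2 * A2.T @ b2 + lam + rho * x1)
    lam = lam + rho * (x1 - x2)                     # multiplier update

x_full = np.linalg.solve(A1.T @ A1 + A2.T @ A2, A1.T @ b1 + A2.T @ b2)
print("split solution  :", np.round(x1, 4))
print("direct solution :", np.round(x_full, 4))
```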
Integrated Design of Downwind Land-Based Wind Turbines using Analytic Gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning, Andrew; Petch, Derek
2016-12-01
Wind turbines are complex systems where component-level changes can have significant system-level effects. Effective wind turbine optimization generally requires an integrated analysis approach with a large number of design variables. Optimizing across large variable sets is orders of magnitude more efficient with gradient-based methods as compared with gradient-free methods, particularly when using exact gradients. We have developed a wind turbine analysis set of over 100 components where 90% of the models provide numerically exact gradients through symbolic differentiation, automatic differentiation, and adjoint methods. This framework is applied to a specific design study focused on downwind land-based wind turbines. Downwind machines are of potential interest for large wind turbines where the blades are often constrained by the stiffness required to prevent a tower strike. The mass of these rotor blades may be reduced by utilizing a downwind configuration where the constraints on tower strike are less restrictive. The large turbines of this study range in power rating from 5 to 7 MW and in diameter from 105 m to 175 m. The changes in blade mass and power production have important effects on the rest of the system, and thus the nacelle and tower systems are also optimized. For high-wind-speed sites, downwind configurations do not appear advantageous. The decrease in blade mass (10%) is offset by increases in tower mass caused by the bending moment from the rotor-nacelle assembly. For low-wind-speed sites, the decrease in blade mass is more significant (25-30%) and shows potential for modest decreases in overall cost of energy (around 1-2%).
Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting
NASA Astrophysics Data System (ADS)
Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt
2018-04-01
We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.
Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz
2015-06-10
A versatile numerical model for spectral transmission/reflection and group delay characteristic analysis, and for the design of tapered fiber Bragg gratings (TFBGs), is presented. This approach ensures flexibility in defining both the distribution of the refractive index change of the gratings (including apodization) and the shape of the taper profile. Additionally, the sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, was also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width while simultaneously minimizing the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of the model's versatility and the possibility of defining other objective functions for the Pareto optimization creates a universal tool for TFBG analysis and design.
Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method
NASA Astrophysics Data System (ADS)
Li, Jing
2016-07-01
This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, generic fuel-optimal low-thrust formation reconfiguration can be reduced to reconfiguration without any initial and terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and the maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.
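The underlying relative dynamics are simple enough to propagate directly. The sketch below integrates the Clohessy-Wiltshire equations with an assumed constant thrust acceleration; it illustrates the dynamics only, not the switching-time optimization or pseudospectral transcription described above, and the orbit and thrust parameters are made-up values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Clohessy-Wiltshire relative dynamics (x radial, y along-track, z cross-track)
# with a constant low-thrust acceleration. Parameter values are assumptions.
n = 0.0011                                    # rad/s, mean motion of a low circular orbit
a_thrust = np.array([0.0, 1e-4, 0.0])         # m/s^2, assumed constant thrust direction

def cw_rhs(t, s):
    x, y, z, vx, vy, vz = s
    ax = 3 * n**2 * x + 2 * n * vy + a_thrust[0]
    ay = -2 * n * vx + a_thrust[1]
    az = -n**2 * z + a_thrust[2]
    return [vx, vy, vz, ax, ay, az]

s0 = [100.0, -200.0, 30.0, 0.0, 0.0, 0.0]     # initial relative state [m, m/s]
sol = solve_ivp(cw_rhs, (0.0, 3000.0), s0, rtol=1e-9, atol=1e-9)
print("relative position after 3000 s [m]:", np.round(sol.y[:3, -1], 2))
```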
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, Jaroslav, E-mail: hasling@karlin.mff.cuni.cz; Stebel, Jan, E-mail: stebel@math.cas.cz
2011-04-15
We study the shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to the optimal control problem in which the control variable is the shape of the domain representing the header, the state problem is represented by the generalized Navier-Stokes system with nontrivial boundary conditions. This paper deals with numerical aspects of the problem.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1989-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least-squares norm are reported.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1992-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
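A common way to realize non-uniform point placement in one dimension is to equidistribute a monitor function. The short sketch below illustrates the general moving-grid idea (it is not the least-squares-optimal grids computed in these reports): grid points are placed so that the arc-length monitor of a steep profile has equal integral over every cell.

```python
import numpy as np

# Equidistribution of the arc-length monitor M = sqrt(1 + u_x^2) for the profile
# u(x) = tanh(40(x - 0.5)): equal monitor shares per cell cluster points near the front.
ux = lambda x: 40.0 / np.cosh(40.0 * (x - 0.5)) ** 2      # derivative of the tanh profile

xf = np.linspace(0.0, 1.0, 2001)                          # fine background mesh
M = np.sqrt(1.0 + ux(xf) ** 2)
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(xf))))

npts = 21
targets = np.linspace(0.0, cum[-1], npts)                 # equal monitor shares
grid = np.interp(targets, cum, xf)                        # invert the cumulative integral
print(np.round(grid, 3))                                  # points cluster near x = 0.5
```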
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique to solve optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on the given standard test problems compared to other existing CG methods.
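For readers unfamiliar with the family of methods being compared, the following is a minimal nonlinear CG sketch using the classical Fletcher-Reeves coefficient with an Armijo backtracking line search and a steepest-descent restart safeguard. It illustrates the generic CG update only; it does not implement the specific coefficient proposed in this paper, which is analyzed with exact line search, and the test function is the standard Rosenbrock function.

```python
import numpy as np

# Nonlinear conjugate gradient: Fletcher-Reeves coefficient, Armijo backtracking
# line search, and a restart to steepest descent when the direction is not a
# descent direction.
def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

x = np.array([-1.2, 1.0])
g = grad(x)
d = -g
for k in range(5000):
    if g @ d >= 0.0:
        d = -g                                     # restart if not a descent direction
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5                                   # Armijo backtracking
    x_new = x + t * d
    g_new = grad(x_new)
    beta = (g_new @ g_new) / (g @ g)               # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-6:
        break

print(f"x = {np.round(x, 6)}, ||grad|| = {np.linalg.norm(g):.2e}, iterations = {k + 1}")
```

Swapping the `beta` line for another coefficient (Polak-Ribiere, Hestenes-Stiefel, or a proposed variant) is exactly the kind of comparison such studies perform.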
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
NASA Astrophysics Data System (ADS)
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on experimental results for two distinctive actuator prototypes (linear and bending actuators), including free displacement and blocked forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
The NonConforming Virtual Element Method for the Stokes Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
2016-01-01
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
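For context, a minimal statement (in standard notation, not quoted from the paper) of the continuous Stokes problem and the discrete inf-sup condition referred to above:

```latex
% Steady Stokes problem and discrete inf-sup (LBB) condition, standard notation
-\nu \Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad
\nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega, \qquad
\mathbf{u} = \mathbf{0} \ \text{on } \partial\Omega,
\qquad\text{with stability requiring}\qquad
\inf_{q_h \in Q_h \setminus \{0\}} \;\sup_{\mathbf{v}_h \in V_h \setminus \{\mathbf{0}\}}
\frac{\int_\Omega q_h \, \nabla_h \!\cdot\! \mathbf{v}_h \, dx}
     {\|q_h\|_{0}\,\|\mathbf{v}_h\|_{1,h}} \;\ge\; \beta > 0 .
```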
Growth of zinc selenide single crystals by physical vapor transport in microgravity
NASA Technical Reports Server (NTRS)
Rosenberger, Franz
1993-01-01
The goals of this research were the optimization of growth parameters for large (20 mm diameter and length) zinc selenide single crystals with low structural defect density, and the development of a 3-D numerical model for the transport rates to be expected in physical vapor transport under a given set of thermal and geometrical boundary conditions, in order to provide guidance for an advantageous conduct of the growth experiments. In the crystal growth studies, it was decided to exclusively apply the Effusive Ampoule PVT technique (EAPVT) to the growth of ZnSe. In this technique, the accumulation of transport-limiting gaseous components at the growing crystal is suppressed by continuous effusion to vacuum of part of the vapor contents. This is achieved through calibrated leaks in one of the ground joints of the ampoule. Regarding the PVT transport rates, a 3-D spectral code was modified. After introduction of the proper boundary conditions and subroutines for the composition-dependent transport properties, the code reproduced the experimentally determined transport rates for the two cases with strongest convective flux contributions to within the experimental and numerical error.
Efficient and Robust Optimization for Building Energy Simulation
Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda
2016-01-01
Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell’s Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that the Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell’s Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell’s Hybrid method presently used in HVACSIM+. PMID:27325907
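As a hedged, generic illustration of the two solver families compared above, the sketch below calls SciPy's MINPACK wrappers for Powell's hybrid method and Levenberg-Marquardt on a small toy system; it is not the HVACSIM+ implementation.

```python
# Generic comparison of Powell's hybrid method and Levenberg-Marquardt on a small
# nonlinear algebraic system, using SciPy's MINPACK wrappers (not HVACSIM+ itself).
import numpy as np
from scipy.optimize import root

def residuals(x):
    # a toy coupled nonlinear system F(x) = 0
    return [x[0] * np.cos(x[1]) - 4.0,
            x[0] * x[1] - x[1] - 5.0]

x0 = np.array([6.0, 1.0])
for method in ("hybr", "lm"):          # Powell hybrid vs. Levenberg-Marquardt
    sol = root(residuals, x0, method=method)
    print(method, sol.success, sol.x, np.max(np.abs(sol.fun)))
```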
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with the commonly adopted detection methods incorporating statistical theory, the sequential Bayesian approach offers the advantage of shorter verification time during the analysis of spectra that contain low total counts, especially for complex radionuclide components. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event sequence generator using Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented by the expected detection rate (Am) and the tested detection rate (Gm) parameters respectively, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
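The following is a heavily simplified sketch of the sequential detection idea: interarrival times of detector events are scored with a running log-likelihood ratio for "background only" versus "background plus source". The rates, prior and threshold are illustrative assumptions; the approach of Candy et al. additionally models photon energies and the detector response, which this sketch does not.

```python
# Minimal sketch of a sequential (Bayesian) detection test on a gamma-ray event
# sequence: interarrival times are exponential with rate lam_b (background only)
# or lam_b + lam_s (source present). Illustration only.
import numpy as np

rng = np.random.default_rng(0)
lam_b, lam_s = 5.0, 3.0                 # assumed count rates (counts/s)
dt = rng.exponential(1.0 / (lam_b + lam_s), size=200)   # simulated source-present data

prior_source = 0.5
log_odds = np.log(prior_source / (1.0 - prior_source))
threshold = np.log(99.0)                # declare detection at ~99% posterior odds

for k, t in enumerate(dt, start=1):
    # log-likelihood ratio of one interarrival time under the two hypotheses
    llr = (np.log(lam_b + lam_s) - (lam_b + lam_s) * t) - (np.log(lam_b) - lam_b * t)
    log_odds += llr
    if log_odds > threshold:
        print(f"source declared after {k} events, "
              f"posterior P(source) = {1.0 / (1.0 + np.exp(-log_odds)):.4f}")
        break
```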
NASA Astrophysics Data System (ADS)
Bürger, Raimund; Kumar, Sarvesh; Ruiz-Baier, Ricardo
2015-10-01
The sedimentation-consolidation and flow processes of a mixture of small particles dispersed in a viscous fluid at low Reynolds numbers can be described by a nonlinear transport equation for the solids concentration coupled with the Stokes problem written in terms of the mixture flow velocity and the pressure field. Here both the viscosity and the forcing term depend on the local solids concentration. A semi-discrete discontinuous finite volume element (DFVE) scheme is proposed for this model. The numerical method is constructed on a baseline finite element family of linear discontinuous elements for the approximation of velocity components and concentration field, whereas the pressure is approximated by piecewise constant elements. The unique solvability of both the nonlinear continuous problem and the semi-discrete DFVE scheme is discussed, and optimal convergence estimates in several spatial norms are derived. Properties of the model and the predicted space accuracy of the proposed formulation are illustrated by detailed numerical examples, including flows under gravity with changing direction, a secondary settling tank in an axisymmetric setting, and batch sedimentation in a tilted cylindrical vessel.
NASA Astrophysics Data System (ADS)
Wu, Jun; Fan, Ting-Bo; Xu, Di; Zhang, Dong
2014-10-01
The sub-harmonic component generated by microbubbles has been shown to have potential use in noninvasive blood pressure measurement. Both theoretical and experimental studies are performed in the present work to investigate the dependence of the sub-harmonic generation on the overpressure for different excitation pressure amplitudes and pulse lengths. With 4-MHz ultrasound excitation at an applied acoustic pressure amplitude of 0.24 MPa, the measured sub-harmonic amplitude decreases as the overpressure increases, while a non-monotonic change is observed for applied acoustic pressures of 0.36 MPa and 0.48 MPa, and the peak position in the curve of the sub-harmonic response versus the overpressure shifts toward higher overpressure as the excitation pressure amplitude increases. Furthermore, an exciting pulse of long duration could lead to better sensitivity of the sub-harmonic response to overpressure. The measured results are explained by numerical simulations based on the Marmottant model, with which they qualitatively accord. This work might provide a preliminary proof for the optimization of noninvasive blood pressure measurement using sub-harmonic generation from microbubbles.
Advances in computational design and analysis of airbreathing propulsion systems
NASA Technical Reports Server (NTRS)
Klineberg, John M.
1989-01-01
The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction of the possession of more weapons with the shift of allocation from missiles to value that they induce.
Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution
NASA Astrophysics Data System (ADS)
van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.
2014-12-01
Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectory of plasma current, EC heating and current drive distribution is determined that minimizes a chosen cost function, while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.
NASA Astrophysics Data System (ADS)
Komodromos, A.; Tekkaya, A. E.; Hofmann, J.; Fleischer, J.
2018-05-01
Since electric motors are gaining in importance in many fields of application, e.g. hybrid electric vehicles, optimization of the linear coil winding process greatly contributes to an increase in productivity and flexibility. For the investigation of the forming behavior of the winding wire, the material behavior is characterized in different experimental setups. Numerical examinations of the linear winding process are carried out in a case study for a rectangular bobbin in order to analyze the influence of forming parameters on the resulting properties of the wound coil. Besides the numerical investigation of the linear winding method by using the finite element method (FEM), a multi-body dynamics (MBD) simulation is carried out. The multi-body dynamics simulation is necessary to represent the movement of the bodies as well as the connection of the components during winding. The finite element method is used to represent the material behavior of the copper wire and the plastic strain distribution within the wire. It becomes clear that the MBD simulation is not sufficient for analyzing the process and the wire behavior in its entirety. Important parameters that define the final coil properties cannot be analyzed with sufficient precision, e.g. the clearance between coil bobbin and wire as well as the wire deformation in the form of a diameter reduction, which negatively affects the ohmic resistance. Finally, the numerical investigations are validated experimentally by linear winding tests.
Computational Methods for Identification, Optimization and Control of PDE Systems
2010-04-30
focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where...that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods...y sus Aplicaciones, Ciudad de Cordoba - Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse
Numerical approach to optimal portfolio in a power utility regime-switching model
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Koleva, Miglena N.; Vulkov, Lubin G.
2017-12-01
We consider a system of weakly coupled degenerate semi-linear parabolic equations of optimal portfolio in a regime-switching with power utility function, derived by A.R. Valdez and T. Vargiolu [14]. First, we discuss some basic properties of the solution of this system. Then, we develop and analyze implicit-explicit, flux limited finite difference schemes for the differential problem. Numerical experiments are discussed.
Deterministic Design Optimization of Structures in OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were assessed through this comparison and are reported here.
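For readers unfamiliar with the framework, a minimal component-plus-driver sketch is shown below. It uses the current openmdao.api interface, which differs from the 2012-era version described in the paper, and the toy paraboloid merely stands in for the structural solvers mentioned above.

```python
# Minimal sketch of defining a component and running a gradient-based optimization
# in OpenMDAO (current openmdao.api interface; toy problem, not the paper's solvers).
import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    def setup(self):
        self.add_input('x', val=0.0)
        self.add_input('y', val=0.0)
        self.add_output('f', val=0.0)
        # finite-difference partials keep the example short
        self.declare_partials('f', ['x', 'y'], method='fd')

    def compute(self, inputs, outputs):
        x, y = inputs['x'], inputs['y']
        outputs['f'] = (x - 3.0)**2 + x * y + (y + 4.0)**2 - 3.0

prob = om.Problem()
prob.model.add_subsystem('parab', Paraboloid(), promotes=['*'])
prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f')
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.setup()
prob.run_driver()
print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
```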
Translator for Optimizing Fluid-Handling Components
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2007-01-01
A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
Numerical Simulation of Callus Healing for Optimization of Fracture Fixation Stiffness
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim
2014-01-01
The stiffness of fracture fixation devices together with musculoskeletal loading defines the mechanical environment within a long bone fracture, and can be quantified by the interfragmentary movement. In vivo results suggested that this can have acceleratory or inhibitory influences, depending on direction and magnitude of motion, indicating that some complications in fracture treatment could be avoided by optimizing the fixation stiffness. However, general statements are difficult to make due to the limited number of experimental findings. The aim of this study was therefore to numerically investigate healing outcomes under various combinations of shear and axial fixation stiffness, and to detect the optimal configuration. A calibrated and established numerical model was used to predict fracture healing for numerous combinations of axial and shear fixation stiffness under physiological, superimposed, axial compressive and translational shear loading in sheep. Characteristic maps of healing outcome versus fixation stiffness (axial and shear) were created. The results suggest that delayed healing of 3 mm transversal fracture gaps will occur for highly flexible or very rigid axial fixation, which was corroborated by in vivo findings. The optimal fixation stiffness for ovine long bone fractures was predicted to be 1000–2500 N/mm in the axial and >300 N/mm in the shear direction. In summary, an optimized, moderate axial stiffness together with certain shear stiffness enhances fracture healing processes. The negative influence of one improper stiffness can be compensated by adjustment of the stiffness in the other direction. PMID:24991809
A reliable algorithm for optimal control synthesis
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1992-01-01
In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
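For context, the sketch below shows the standard textbook route for evaluating an H2-like quadratic cost of a stable LTI system via a Lyapunov equation, cross-checked by direct quadrature of the impulse response. The report's contribution is a different, Pade-series-based evaluation that remains robust for defective, degenerate systems; that algorithm is not reproduced here, and the system matrices below are arbitrary illustrative choices.

```python
# Hedged illustration: H2-like cost J = integral of x'Qx dt for x' = Ax, x(0) = B,
# evaluated via the Lyapunov equation A'P + PA + Q = 0 (so J = trace(B'PB)), and
# cross-checked by quadrature of the impulse response using the matrix exponential.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # stable system matrix (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                                 # state weighting

P = solve_continuous_lyapunov(A.T, -Q)        # solves A'P + PA = -Q
J_lyap = float(np.trace(B.T @ P @ B))

ts = np.linspace(0.0, 40.0, 4001)
vals = np.array([(expm(A * t) @ B).T @ Q @ (expm(A * t) @ B) for t in ts]).ravel()
J_quad = np.sum((vals[:-1] + vals[1:]) * np.diff(ts)) / 2.0   # trapezoid rule
print(J_lyap, J_quad)                         # the two values should agree closely
```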
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a maximising profit per unit time model, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check when a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples and the sensitivity of the optimal solution with respect to changes in some values of the parameters is assessed.
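A hedged numerical sketch of the kind of lot-size problem discussed above follows; the functional forms (constant demand rate, power-law holding cost) are assumptions chosen for brevity and omit the stock-dependent demand analyzed in the article.

```python
# Illustrative sketch only: lot sizing with a nonlinear holding cost, maximizing
# profit per unit time. Functional forms are assumed, not those of the article.
import numpy as np
from scipy.optimize import minimize_scalar

p, c = 10.0, 6.0      # selling price and purchase cost per unit
K = 50.0              # fixed ordering cost per cycle
d = 100.0             # constant demand rate (units per unit time)
h0, gamma = 0.2, 1.5  # nonlinear holding-cost parameters

def profit_per_unit_time(q):
    # cycle length T = q/d; holding cost per cycle = h0 * integral_0^T (q - d t)**gamma dt
    holding_per_cycle = h0 * q**(gamma + 1.0) / (d * (gamma + 1.0))
    return (p - c) * d - K * d / q - holding_per_cycle * d / q

res = minimize_scalar(lambda q: -profit_per_unit_time(q),
                      bounds=(1.0, 1000.0), method='bounded')
print(f"optimal lot size ~ {res.x:.1f}, profit per unit time ~ {-res.fun:.2f}")
```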
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
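A minimal sketch of the PCA step described above is given here, run on synthetic displacement vector fields; in the paper the training DVFs come from 2D/3D registration rather than the toy generator used below.

```python
# Minimal sketch: build a low-dimensional motion model from a training set of
# displacement vector fields (DVFs) via PCA. Synthetic data only.
import numpy as np

n_train, n_voxels = 40, 5000            # training DVFs, each flattened to length 3*N_vox
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, n_train)
modes = rng.standard_normal((2, 3 * n_voxels))
dvfs = (np.outer(np.sin(t), modes[0])
        + np.outer(0.3 * np.cos(2 * t), modes[1])
        + 0.01 * rng.standard_normal((n_train, 3 * n_voxels)))

mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
U, S, Vt = np.linalg.svd(centered, full_matrices=False)   # PCA via SVD
n_modes = 2
eigenvectors = Vt[:n_modes]             # rows: principal motion modes

# motion model: DVF(w) = mean + sum_k w_k * eigenvector_k, with weights w_k
# driven by the breathing trace during reconstruction
w = np.array([0.8, -0.2])
dvf_estimate = mean_dvf + w @ eigenvectors
explained = (S[:n_modes]**2).sum() / (S**2).sum()
print(f"first {n_modes} modes explain {100 * explained:.1f}% of training variance")
```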
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
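As a much-simplified illustration of treating extrema as fixed points of a flow, the sketch below integrates a plain gradient flow to a stationary point; the homotopy continuation and continuously updated maximum-entropy prior that define the paper's method are not reproduced here.

```python
# Simplified illustration: extrema of f appear as fixed points of the gradient flow
# dx/dt = -grad f(x); integrating the flow drives x to a stationary point.
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):
    # gradient of the Rosenbrock function f(x, y) = (1-x)^2 + 100 (y - x^2)^2
    dfdx = -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2)
    dfdy = 200.0 * (x[1] - x[0]**2)
    return np.array([dfdx, dfdy])

sol = solve_ivp(lambda t, x: -grad_f(x), t_span=(0.0, 200.0), y0=[-1.2, 1.0],
                method='LSODA', rtol=1e-10, atol=1e-12)
print("fixed point reached:", sol.y[:, -1])   # should approach the minimum at (1, 1)
```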
Numerical implementation of multiple peeling theory and its application to spider web anchorages.
Brely, Lucas; Bosia, Federico; Pugno, Nicola M
2015-02-06
Adhesion of spider web anchorages has been studied in recent years, including the specific functionalities achieved through different architectures. To better understand the delamination mechanisms of these and other biological or artificial fibrillar adhesives, and how their adhesion can be optimized, we develop a novel numerical model to simulate the multiple peeling of structures with arbitrary branching and adhesion angles, including complex architectures. The numerical model is based on a recently developed multiple peeling theory, which extends the energy-based single peeling theory of Kendall, and can be applied to arbitrarily complex structures. In particular, we numerically show that a multiple peeling problem can be treated as the superposition of single peeling configurations even for complex structures. Finally, we apply the developed numerical approach to study spider web anchorages, showing how their function is achieved through optimal geometrical configurations.
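For context, Kendall's single-tape peeling balance, which the multiple peeling theory generalizes to branched, multi-tape geometries, is recalled below in standard notation (a reminder only, not the paper's equations):

```latex
% Kendall's single-tape peeling balance: tape width b, thickness d, Young's modulus E,
% peel angle \theta, adhesion energy per unit area R, peel force F
\left(\frac{F}{b}\right)^{2}\frac{1}{2\,d\,E}
  + \left(\frac{F}{b}\right)\left(1-\cos\theta\right) - R = 0 ,
\qquad
\frac{F}{b} = d\,E\left[\,\cos\theta - 1 + \sqrt{(1-\cos\theta)^{2} + \frac{2R}{d\,E}}\,\right].
```

Multiple peeling applies this balance simultaneously to every tape of a branched contact, which is what the numerical model above extends to arbitrary geometries.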
Xu, Lang; Wang, Chuanxu; Li, Hui
2017-06-08
We focus on the impacts of technological spillovers and environmental awareness in a two-echelon supply chain with a single supplier and a single manufacturer aiming to reduce carbon emissions. In this supply chain, carbon abatement investment becomes one of the key factors in cutting costs and improving profits, since it reduces the production costs of components and products through investment by the supply chain players. On the basis of optimality theory, centralized and decentralized models are established to investigate the optimal decisions and profits. Further, setting the players' profits in the decentralized scenario as the disagreement points, we propose a bargaining-coordination contract based on revenue-cost sharing to enhance performance. Finally, theoretical comparison and numerical analysis show that: (i) the optimal profits of the players and the supply chain improve as technological spillovers and environmental awareness increase, and the profits under the bargaining-coordination contract are higher than those in the decentralized scenario; (ii) technological spillovers between the players amplify the impact of "free-ride" behavior, in which the supplier always incentivizes the manufacturer to improve carbon emission intensity, but cooperation is achieved and profits improve only when technological spillovers and environmental awareness are large; (iii) the contract can effectively coordinate the supply chain and improve carbon abatement investment.
Geometric Design of Scalable Forward Scatterers for Optimally Efficient Solar Transformers.
Kim, Hye-Na; Vahidinia, Sanaz; Holt, Amanda L; Sweeney, Alison M; Yang, Shu
2017-11-01
It would be ideal to deliver equal, optimally efficient "doses" of sunlight to all cells in a photobioreactor system, while simultaneously utilizing the entire solar resource. Backed by numerical scattering simulation and optimization, here the design, synthesis, and characterization of synthetic iridocytes that recapitulate the salient forward-scattering behavior of the Tridacnid clam system are reported, which presents the first geometric solution to allow narrow, precise forward redistribution of flux, utilizing the solar resource at the maximum quantum efficiency possible in living cells. The synthetic iridocytes are composed of silica nanoparticles in microspheres embedded in gelatin; both are low-refractive-index and inexpensive materials. They show wavelength selectivity, have little loss (the back-scattering intensity is reduced to less than ≈0.01% of the forward-scattered intensity), and a narrow forward-scattering cone similar to that of giant clams. Moreover, by comparing experiments and theoretical calculations, it is confirmed that the nonuniformity of the scatterer sizes is a "feature, not a bug" of the design, allowing for efficient, forward redistribution of solar flux in a micrometer-scaled paradigm. This method is environmentally benign, inexpensive, and scalable to produce optical components that will find uses in efficiency-limited solar conversion technologies, heat sinks, and biofuel production.
Multi-disciplinary optimization of railway wheels
NASA Astrophysics Data System (ADS)
Nielsen, J. C. O.; Fredö, C. R.
2006-06-01
A numerical procedure for multi-disciplinary optimization of railway wheels, based on Design of Experiments (DOE) methodology and automated design, is presented. The target is a wheel design that meets the requirements for fatigue strength, while minimizing the unsprung mass and rolling noise. A 3-level full factorial (3LFF) DOE is used to collect data points required to set up Response Surface Models (RSM) relating design and response variables in the design space. Computationally efficient simulations are thereafter performed using the RSM to identify the solution that best fits the design target. A demonstration example, including four geometric design variables in a parametric finite element (FE) model, is presented. The design variables are wheel radius, web thickness, lateral offset between rim and hub, and radii at the transitions rim/web and hub/web, but more variables (including material properties) can be added if needed. To improve further the performance of the wheel design, a constrained layer damping (CLD) treatment is applied on the web. For a given load case, compared to a reference wheel design without CLD, a combination of wheel shape and damping optimization leads to the conclusion that a reduction in the wheel component of A-weighted rolling noise of 11 dB can be achieved if a simultaneous increase in wheel mass of 14 kg is accepted.
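A hedged sketch of the DOE/RSM workflow described above follows: a 3-level full factorial design, a quadratic response surface fitted by least squares, and a cheap search over the fitted surface. The toy response function stands in for the FE and acoustic models of the paper.

```python
# Illustrative DOE/RSM sketch: 3-level full factorial design in coded variables,
# quadratic response surface fit, and a grid search over the cheap surrogate.
import itertools
import numpy as np

def simulate(x):                       # stand-in for the expensive FE/noise model
    x1, x2 = x
    return (x1 - 0.3)**2 + 0.5 * (x2 + 0.2)**2 + 0.2 * x1 * x2

levels = [-1.0, 0.0, 1.0]              # coded design variables
design = np.array(list(itertools.product(levels, repeat=2)))   # 3^2 runs
response = np.array([simulate(x) for x in design])

def quad_basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(quad_basis(design), response, rcond=None)

# evaluate the cheap response surface on a fine grid and take its minimizer
grid = np.array(list(itertools.product(np.linspace(-1, 1, 101), repeat=2)))
pred = quad_basis(grid) @ coef
best = grid[np.argmin(pred)]
print("RSM optimum (coded units):", best)
```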
Yu, Jia; Yu, Zhichao; Tang, Chenlong
2016-07-04
The hot work environment of electronic components in the instrument cabin of spacecraft was researched, and a new thermal protection structure, namely graphite carbon foam, which is an impregnated phase-transition material, was adopted to implement the thermal control on the electronic components. We used the optimized parameters obtained from ANSYS to conduct 2D optimization, 3-D modeling and simulation, as well as the strength check. Finally, the optimization results were verified by experiments. The results showed that after optimization, the structured carbon-based energy-storing composite material could reduce the mass and realize the thermal control over electronic components. This phase-transition composite material still possesses excellent temperature control performance after its repeated melting and solidifying.
NASA Astrophysics Data System (ADS)
Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.
2012-10-01
According to Godunov theorem for numerical calculations of advection equations, there exist no higher-order schemes with constant positive difference coefficients in a family of polynomial schemes with an accuracy exceeding the first-order. We propose a third-order computational scheme for numerical fluxes to guarantee the non-negative difference coefficients of resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors for the numerical fluxes while fulfilling the positivity condition of the difference coefficients which are variable depending on the local Courant number and diffusion number. The feature of the present optimized scheme consists in keeping the third-order accuracy anywhere without any numerical flux limiter. We extend the present method into multi-dimensional equations. Numerical experiments for advection-diffusion equations showed nonoscillatory solutions.
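For orientation, the positive-coefficient constraint mentioned above can already be seen in the first-order upwind scheme (an illustration only, not the paper's third-order scheme):

```latex
% Explicit first-order upwind scheme for u_t + a u_x = \nu u_{xx}, a > 0
u_i^{n+1} = (1 - C - 2D)\,u_i^{n} + (C + D)\,u_{i-1}^{n} + D\,u_{i+1}^{n},
\qquad C = \frac{a\,\Delta t}{\Delta x}, \quad D = \frac{\nu\,\Delta t}{\Delta x^{2}},
```

with all difference coefficients non-negative precisely when C + 2D <= 1. The proposed scheme keeps this non-negativity, with C and D varying locally, while pushing the truncation error of the numerical fluxes to third order.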
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Owens, Lewis R., Jr.; Lin, John C.
2006-01-01
This research will investigate the use of Design-of-Experiments (DOE) in the development of an optimal passive flow control vane design for a boundary-layer-ingesting (BLI) offset inlet in transonic flow. This inlet flow control is designed to minimize the engine fan face distortion levels and first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. Numerical simulations of the BLI inlet are computed using the Reynolds-averaged Navier-Stokes (RANS) flow solver, OVERFLOW, developed at NASA. These simulations are used to generate the numerical experiments for the DOE response surface model. In this investigation, two DOE optimizations were performed using a D-Optimal Response Surface model. The first DOE optimization was performed using four design factors which were vane height and angles-of-attack for two groups of vanes. One group of vanes was placed at the bottom of the inlet and a second group symmetrically on the sides. The DOE design was performed for a BLI inlet with a free-stream Mach number of 0.85 and a Reynolds number of 2 million, based on the length of the fan face diameter, matching an experimental wind tunnel BLI inlet test. The first DOE optimization required a fifth order model having 173 numerical simulation experiments and was able to reduce the DC60 baseline distortion from 64% down to 4.4%, while holding the pressure recovery constant. A second DOE optimization was performed holding the vane heights at a constant value from the first DOE optimization with the two vane angles-of-attack as design factors. This DOE only required a second order model fit with 15 numerical simulation experiments and reduced DC60 to 3.5% with small decreases in the fourth and fifth harmonic amplitudes. The second optimal vane design was tested at the NASA Langley 0.3-Meter Transonic Cryogenic Tunnel in a BLI inlet experiment. The experimental results showed an 80% reduction of DPCPavg, the circumferential distortion level at the engine fan face.
Finding undetected protein associations in cell signaling by belief propagation.
Bailly-Bechet, M; Borgs, C; Braunstein, A; Chayes, J; Dagkessamanskaia, A; François, J-M; Zecchina, R
2011-01-11
External information propagates in the cell mainly through signaling cascades and transcriptional activation, allowing it to react to a wide spectrum of environmental changes. High-throughput experiments identify numerous molecular components of such cascades that may, however, interact through unknown partners. Some of them may be detected using data coming from the integration of a protein-protein interaction network and mRNA expression profiles. This inference problem can be mapped onto the problem of finding appropriate optimal connected subgraphs of a network defined by these datasets. The optimization procedure turns out to be computationally intractable in general. Here we present a new distributed algorithm for this task, inspired from statistical physics, and apply this scheme to alpha factor and drug perturbations data in yeast. We identify the role of the COS8 protein, a member of a gene family of previously unknown function, and validate the results by genetic experiments. The algorithm we present is specially suited for very large datasets, can run in parallel, and can be adapted to other problems in systems biology. On renowned benchmarks it outperforms other algorithms in the field.
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
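A minimal numpy sketch of the stochastic ensemble Kalman analysis step that such ensemble-based schemes build on is shown below; the point-sensor observation operator and noise levels are generic assumptions, not the authors' scalar-source setup.

```python
# Minimal stochastic ensemble Kalman filter (EnKF) analysis step with a linear,
# point-sensor observation operator H. Generic illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 50, 5, 40

H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), rng.choice(n_state, n_obs, replace=False)] = 1.0   # point sensors
R = 0.05**2 * np.eye(n_obs)                                            # obs error cov

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
y = H @ truth + 0.05 * rng.standard_normal(n_obs)                      # noisy observations

Xf = truth[:, None] + 0.5 * rng.standard_normal((n_state, n_ens))      # forecast ensemble

Xf_mean = Xf.mean(axis=1, keepdims=True)
A = Xf - Xf_mean
Pf = A @ A.T / (n_ens - 1)                        # ensemble forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain

Y = y[:, None] + 0.05 * rng.standard_normal((n_obs, n_ens))   # perturbed observations
Xa = Xf + K @ (Y - H @ Xf)                        # analysis ensemble
print("RMSE forecast :", np.sqrt(np.mean((Xf.mean(axis=1) - truth)**2)))
print("RMSE analysis :", np.sqrt(np.mean((Xa.mean(axis=1) - truth)**2)))
```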
Thermo-optical Modelling of Laser Matter Interactions in Selective Laser Melting Processes.
NASA Astrophysics Data System (ADS)
Vinnakota, Raj; Genov, Dentcho
Selective laser melting (SLM) is one of the promising advanced manufacturing techniques, providing an ideal platform to manufacture components with zero geometric constraints. Coupling the electromagnetic and thermodynamic processes involved in SLM and developing a comprehensive theoretical model of them is of great importance, since it can provide significant improvements in the printing process by revealing the optimal parametric space related to applied laser power, scan velocity, powder material, layer thickness and porosity. Here, we present a self-consistent thermo-optical model which simultaneously solves Maxwell's equations and the heat transfer equation and provides insight into the electromagnetic energy released in the powder beds and the concurrent thermodynamics of the particles' temperature rise and onset of melting. The numerical calculations are compared with a developed analytical model of the SLM process, providing insight into the dynamics between laser-facilitated Joule heating and the radiation-mitigated rise in temperature. These results provide guidelines toward improved energy efficiency and optimization of SLM process scan rates. The current work is funded by the NSF EPSCoR CIMM project under award #OIA-1541079.
Transversely polarized sub-diffraction optical needle with ultra-long depth of focus
NASA Astrophysics Data System (ADS)
Guan, Jian; Lin, Jie; Chen, Chen; Ma, Yuan; Tan, Jiubin; Jin, Peng
2017-12-01
We generated purely transversely polarized sub-diffraction optical needles with ultra-long depth of focus (DOF) by focusing azimuthally polarized (AP) beams that were modulated by a vortex 0-2 π phase plate and binary phase diffraction optical elements (DOEs). The concentric belts' radii of the DOEs were optimized by a hybrid genetic particle swarm optimization (HGPSO) algorithm. For the focusing system with the numerical aperture (NA) of 0.95, an optical needle with the full width at half maximum (FWHM) of 0.40 λ and the DOF of 6.23 λ was generated. Similar optical needles were also generated by binary phase DOEs with different belts. The results demonstrated that the binary phase DOEs could achieve smaller FWHMs and longer DOFs simultaneously. The generated needles were circularly polarized on the z-axis and there were no longitudinally polarized components in the focal fields. The radius fabrication errors of a DOE have little effect on the optical needle produced by itself. The generated optical needles can be applied to the fields of photolithography, high-density optical data storage, microscope imaging and particle trapping.
Numerical Solution of Optimal Control Problem under SPDE Constraints
2011-10-14
Faure and Sobol sequences are used to evaluate high dimensional integrals, and the errors in the numerical results for over 30 dimensions become quite...sequence; right: 1000 points of dimension 26 and 27 projection for optimal Kronecker sequence. benchmark Faure and Sobol methods. 2.2 High order...J. Goodman and J. O'Rourke, Handbook of discrete and computational geometry, CRC Press, Inc., (2004). [5] S. Joe and F. Kuo, Constructing Sobol
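A short, hedged sketch of generating Sobol points and using them for a high-dimensional quadrature with scipy.stats.qmc (available in SciPy 1.7 and later) is given below; the separable test integrand is an illustrative assumption, not one of the SPDE-constrained problems of the report.

```python
# Sobol quasi-Monte Carlo quadrature of a 30-dimensional separable test integrand
# whose exact value over the unit cube is 1, compared with plain Monte Carlo.
import numpy as np
from scipy.stats import qmc

dim, m = 30, 12                        # 2**12 = 4096 points in 30 dimensions
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
points = sobol.random_base2(m=m)       # shape (4096, 30), in the unit hypercube

# E[prod_j (1 + (x_j - 0.5)/j)] over the unit cube equals 1 exactly
weights = np.arange(1, dim + 1)
qmc_estimate = np.prod(1.0 + (points - 0.5) / weights, axis=1).mean()

mc_points = np.random.default_rng(0).random((2**m, dim))
mc_estimate = np.prod(1.0 + (mc_points - 0.5) / weights, axis=1).mean()
print(f"exact 1.0 | Sobol {qmc_estimate:.6f} | plain MC {mc_estimate:.6f}")
```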
NASA Astrophysics Data System (ADS)
Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath
2018-07-01
One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of solutions of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expression and from an appropriate numerical optimization algorithm.
3D early embryogenesis image filtering by nonlinear partial differential equations.
Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O
2010-08-01
We present nonlinear diffusion equations, numerical schemes to solve them and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal to identify the optimal filtering method and its parameters. In large scale applications dealing with analysis of 3D+time embryogenesis images, an important objective is a correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. The filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the mean Hausdorff distance between a gold standard and different isosurfaces of original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original data and after the filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons concern the ability to split very close objects which are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects it turned out that the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance.
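A minimal 2D sketch of regularized Perona-Malik filtering with an explicit scheme is given below for orientation; the paper itself works on 3D data with a finite volume, semi-implicit discretization, which this sketch does not reproduce, and the diffusivity parameters are illustrative assumptions.

```python
# Minimal 2D regularized Perona-Malik diffusion with an explicit time stepping
# scheme, demonstrated on a noisy synthetic "nucleus" image.
import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik(u, n_steps=50, dt=0.1, K=0.1, sigma=1.0):
    u = u.astype(float).copy()
    for _ in range(n_steps):
        u_s = gaussian_filter(u, sigma)            # regularization (pre-smoothing)
        gx, gy = np.gradient(u_s)
        g = 1.0 / (1.0 + (gx**2 + gy**2) / K**2)   # edge-stopping diffusivity
        ux, uy = np.gradient(u)
        # divergence of g * grad(u)
        div = np.gradient(g * ux, axis=0) + np.gradient(g * uy, axis=1)
        u += dt * div
    return u

x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
clean = (x**2 + y**2 < 0.3).astype(float)          # bright disk as a toy nucleus
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(clean.shape)
filtered = perona_malik(noisy)
print("residual std before/after:", np.std(noisy - clean), np.std(filtered - clean))
```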
ARRAY OPTIMIZATION FOR TIDAL ENERGY EXTRACTION IN A TIDAL CHANNEL – A NUMERICAL MODELING ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Wang, Taiping; Copping, Andrea
This paper presents an application of a hydrodynamic model to simulate tidal energy extraction in a tidally dominated estuary on the Pacific Northwest coast. A series of numerical experiments were carried out to simulate tidal energy extraction with different turbine array configurations, including location, spacing and array size. Preliminary model results suggest that array optimization for tidal energy extraction at a real-world site is a very complex process that requires consideration of multiple factors. Numerical models can be used effectively to assist turbine siting and array arrangement in a tidal turbine farm for tidal energy extraction.
Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azarm, M.A.; Hall, R.E.
1982-01-01
A plausible approach is proposed for translating a single-level criterion into a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for various core melt sequences and for system and component unavailabilities. These guidelines can be used as a means for making decisions regarding the necessity of replacing a component or improving part of a safety system. This approach is applied to estimate a set of numerical guidelines for the various core melt sequences analyzed in the Reactor Safety Study for the Peach Bottom Nuclear Power Plant.
Di Molfetta, A; Santini, L; Forleo, G B; Minni, V; Mafhouz, K; Della Rocca, D G; Fresiello, L; Romeo, F; Ferrari, G
2012-01-01
In spite of the benefits of cardiac resynchronization therapy (CRT), 25-30% of patients are still non-responders. One possible reason could be non-optimal atrioventricular (AV) and interventricular (VV) interval settings. Our aim was to exploit a numerical model of the cardiovascular system for AV and VV interval optimization in CRT. A CRT-dedicated numerical model of the cardiovascular system was previously developed. Echocardiographic parameters, systemic aortic pressure and ECG were collected in 20 consecutive patients before and after CRT. Patient data were reproduced by the model, which was used to optimize the intervals and set them into the device at baseline and at follow-up. The optimal AV and VV intervals were chosen to optimize the simulated selected variable(s) on the basis of both echocardiographic and electrocardiographic parameters. Intervals were different for each patient and, in most cases, they changed at follow-up. The model can reproduce clinical data well, as verified with Bland-Altman analysis and a t-test (p > 0.05). Left ventricular remodeling was 38.7% and the left ventricular ejection fraction increase was 11%, against the 15% and 6% reported in the literature, respectively. The developed numerical model could reproduce patients' conditions at baseline and at follow-up, including the CRT effects. The model could be used to optimize AV and VV intervals at baseline and at follow-up, realizing a personalized and dynamic CRT. A patient-tailored CRT could improve patient outcomes in comparison to literature data.
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383
Computational experiments in the optimal slewing of flexible structures
NASA Technical Reports Server (NTRS)
Baker, T. E.; Polak, Lucian Elijah
1989-01-01
Numerical experiments on the problem of moving a flexible beam are discussed. An optimal control problem is formulated and transcribed into a form which can be solved using semi-infinite optimization techniques. All experiments were carried out on a SUN 3 microcomputer.
NASA Astrophysics Data System (ADS)
Marusak, Piotr M.; Kuntanapreeda, Suwat
2018-01-01
The paper considers the application of a neural-network-based implementation of a model predictive control (MPC) algorithm to electromechanical plants. Properties of such control plants imply that a relatively short sampling time should be used. However, in such a case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, the constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when a standard numerical quadratic programming routine is used. However, very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
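The idea of replacing the quadratic programme by a dynamical system can be sketched as follows (a minimal illustration, not the authors' network): a box-constrained QP of the kind arising in MPC is written as a projection-type ODE whose equilibrium satisfies the QP optimality conditions, and integrated with explicit Euler. The matrices and bounds are hypothetical.

```python
import numpy as np

def qp_via_projection_dynamics(Q, c, lb, ub, dt=0.01, steps=5000):
    """Solve min 0.5 u'Qu + c'u  s.t.  lb <= u <= ub  by integrating the
    projection-type dynamics
        du/dt = -u + clip(u - (Qu + c), lb, ub)
    with explicit Euler; an equilibrium satisfies the fixed-point form of
    the QP optimality conditions for symmetric positive definite Q."""
    u = np.zeros_like(c)
    for _ in range(steps):
        u_proj = np.clip(u - (Q @ u + c), lb, ub)
        u = u + dt * (-u + u_proj)
    return u

# Illustrative 2-variable MPC-like QP with input bounds
Q = np.array([[2.0, 0.3], [0.3, 1.5]])
c = np.array([-1.0, 0.5])
u_opt = qp_via_projection_dynamics(Q, c, lb=-0.5, ub=0.5)
print(u_opt)
```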
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of unnecessary operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
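A minimal sketch of the underlying idea, exploiting sparsity of the transition matrix in the covariance prediction step with scipy.sparse, is shown below; the state dimension and matrices are hypothetical and the authors' offline block-matrix derivation is not reproduced.

```python
import numpy as np
import scipy.sparse as sp

n = 15                                   # hypothetical error-state size
rng = np.random.default_rng(1)

# Sparse, block-structured transition matrix (identity plus a few couplings)
F = sp.eye(n, format="lil")
F[0, 3] = 0.01; F[1, 4] = 0.01; F[2, 5] = 0.01   # e.g. position <- velocity
F = F.tocsr()

Q = sp.diags(np.full(n, 1e-6))           # sparse process-noise covariance
P = np.diag(rng.uniform(0.1, 1.0, n))    # current error covariance (dense)

# Covariance prediction P' = F P F^T + Q using sparse products; only the
# nonzero pattern of F contributes, mirroring the idea of skipping
# operations on zero entries.  P is symmetric, so F P F^T = F (F P)^T.
FP = F @ P                               # sparse-dense product -> dense array
P_pred = F @ FP.T + Q.toarray()
```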
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates the optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used the finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with a flat bottom and 10 ml volume is the best structure to achieve the highest signal of the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, and sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Optimal Power Flow in Multiphase Radial Networks with Delta Connections: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Changhong; Dall-Anese, Emiliano; Low, Steven H.
This paper focuses on multiphase radial distribution networks with mixed wye and delta connections, and proposes a semidefinite relaxation of the AC optimal power flow (OPF) problem. Two multiphase power-flow models are developed to facilitate the integration of delta-connected generation units/loads in the OPF problem. The first model extends traditional branch flow models and is referred to as the extended branch flow model (EBFM). The second model leverages a linear relationship between per-phase power injections and delta connections, which holds under a balanced voltage approximation (BVA). Based on these models, pertinent OPF problems are formulated and relaxed to semidefinite programs (SDPs). Numerical studies on IEEE test feeders show that SDP relaxations can be solved efficiently by a generic optimization solver. Numerical evidence indicates that solving the resultant SDP under BVA is faster than under EBFM. Moreover, both SDP solutions are numerically exact with respect to voltages and branch flows. It is also shown that the SDP solution under BVA has a small optimality gap, while the BVA model is accurate in the sense that it reflects actual system voltages.
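The general shape of such a relaxation can be illustrated with a toy semidefinite programme in CVXPY (assuming CVXPY with an SDP-capable solver is installed); the data below are arbitrary and do not represent the OPF formulation of the paper.

```python
import cvxpy as cp
import numpy as np

# Toy SDP in the same mould: minimize a linear cost of a PSD matrix
# variable subject to a linear (trace) constraint.  In an OPF relaxation
# X would collect products of bus voltage phasors; here the data are
# arbitrary symmetric matrices.
n = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = 0.5 * (C + C.T)
A1 = np.eye(n)

X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(A1 @ X) == 1.0]          # e.g. a normalization constraint
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print("optimal value:", prob.value)
# A rank-1 solution is the analogue of the "numerically exact" relaxation
print("rank of X:", np.linalg.matrix_rank(X.value, tol=1e-6))
```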
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement by simulating the variation tendency of the ocean wave. Although existing work has devoted effort to improving the performance of MASNUM from various aspects, there is still a large unexplored space for further performance improvement. In this paper, we aim at improving the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O and data access optimization. The experimental results demonstrate that our approach achieves higher performance compared to the state-of-the-art work, about 3.5x speedup without degrading the prediction accuracy. In addition, the parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named the Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
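A Levy-flight perturbation of the kind used to escape stagnation can be sketched with Mantegna's algorithm (illustrative parameters and scaling, not the LFMVO settings):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step using Mantegna's algorithm: the ratio
    u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1),
    approximates a heavy-tailed Levy-stable step of index beta."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a (hypothetical) current best solution to generate a new candidate
best = np.array([0.2, -1.0, 0.7])
candidate = best + 0.01 * levy_step(best.size) * best
```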
MIDACO on MINLP space applications
NASA Astrophysics Data System (ADS)
Schlueter, Martin; Erb, Sven O.; Gerdts, Matthias; Kemble, Stephen; Rückmann, Jan-J.
2013-04-01
A numerical study on two challenging mixed-integer non-linear programming (MINLP) space applications and their optimization with MIDACO, a recently developed general-purpose optimization software, is presented. These applications are the optimal control of the ascent of a multiple-stage space launch vehicle and the design of a space mission trajectory from Earth to Jupiter using multiple gravity assists. Additionally, an NLP aerospace application, the optimal control of an F8 aircraft manoeuvre, is discussed and solved. In order to enhance the optimization performance of MIDACO, a hybridization technique coupling MIDACO with an SQP algorithm is presented for two of these three applications. The numerical results show that the applications can be solved to their best known solution (or even a new best solution) in a reasonable time by the considered approach. Since the use of MINLP is still a novelty in the field of (aero)space engineering, the demonstrated capabilities are seen as very promising.
Design of materials with prescribed nonlinear properties
NASA Astrophysics Data System (ADS)
Wang, F.; Sigmund, O.; Jensen, J. S.
2014-09-01
We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two-dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior and also optimized materials with an extreme strain-independent Poisson's ratio for axial strain intervals of ε_i ∈ [0.00, 0.30].
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
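The robustness measure itself is easy to evaluate numerically; the sketch below computes the minimum singular value of the return difference matrix I + K G(jω), with the loop broken at the plant input, over a frequency grid for a hypothetical two-input/two-output state-space plant and a static gain (not the paper's drone model).

```python
import numpy as np

# Hypothetical 2x2 plant in state-space form and a static feedback gain
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = np.eye(2)
K = np.array([[1.0, 0.2], [0.0, 0.8]])

def return_difference_sigma_min(w):
    """sigma_min(I + K G(jw)) with the loop broken at the plant input,
    where G(jw) = C (jwI - A)^-1 B."""
    G = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    return np.linalg.svd(np.eye(2) + K @ G, compute_uv=False).min()

freqs = np.logspace(-2, 2, 200)
robustness = min(return_difference_sigma_min(w) for w in freqs)
print("min over frequency of sigma_min(I + KG):", robustness)
```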
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
NASA Technical Reports Server (NTRS)
Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.
1987-01-01
A method for simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed-loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.
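The LQR ingredient of such a design can be sketched as follows: a single hypothetical flexible mode, with Q and R expressed in terms of mass and stiffness in the spirit of the abstract; the surrounding structural optimization loop is omitted.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 1-DOF flexible mode: state x = [displacement, velocity]
m, k, c_damp = 1.0, 4.0, 0.1                  # mass, stiffness, damping
A = np.array([[0.0, 1.0], [-k / m, -c_damp / m]])
B = np.array([[0.0], [1.0 / m]])

# Weighting matrices written in terms of structural parameters,
# mirroring the idea of making Q and R functions of mass and stiffness
Q = np.diag([k, m])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)          # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)               # LQR gain K = R^-1 B^T P
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("closed-loop eigenvalues:", closed_loop_eigs)
```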
Seghatchian, Jerard; Samama, Meyer Michel
2012-10-01
Massive transfusion (MT) is an empiric mode of treatment advocated for uncontrolled bleeding and massive haemorrhage, aiming at optimal resuscitation and aggressive correction of coagulopathy. Conventional guidelines recommend early administration of crystalloids and colloids in conjunction with red cells, where the red cell also plays a critical haemostatic function. Plasma and platelets are only used in patients with microvascular bleeding with PT/APTT values >1.5 times the normal values and if PLT counts are below 50×10(9)/L. Massive transfusion carries a significant mortality rate (40%), which increases with the number of volume expanders and blood components transfused. Controversies still exist over the optimal ratio of blood components with respect to overall clinical outcomes and collateral damage. While inadequate transfusion is believed to be associated with poor outcomes, empirical over-transfusion results in unnecessary donor exposure with an increased rate of sepsis, transfusion overload and infusion of variable amounts of some biological response modifiers (BRMs), which have the potential to cause additional harm. Alternative strategies, such as early use of tranexamic acid, are helpful. However, in trauma settings the use of warm fresh whole blood (WFWB) instead of reconstituted components with a different ratio of stored components might be the most cost-effective and safer option to improve the patient's survival rate and minimise collateral damage. This manuscript, after a brief summary of standard medical intervention in massive transfusion, focuses on the main characteristics of various substances currently available to overcome massive transfusion coagulopathy. The relative levels of some BRMs in fresh and aged blood components of the same origin are highlighted, and some myths and unresolved issues related to massive transfusion practice are discussed. In brief, the coagulopathy in MT is a complex phenomenon, often complicated by chronic activation of coagulation, platelets, complement and vascular endothelial cells, where haemolysis, microvesiculation, exposure of phosphatidylserine-positive cells, altered red cells with reduced adhesive proteins and the presence of some BRMs could play a pivotal role in the coagulopathy and untoward effects. The challenges of improving the safety of massive transfusion remain as numerous and as varied as ever. The answer may reside in appropriate studies on designer whole blood, combined with new innovative tools to diagnose a coagulopathy and an evidence-based mode of therapy to establish the optimal survival benefit of patients, always taking into account the concept of harm reduction and reduction of collateral damage. Copyright © 2012 Elsevier Ltd. All rights reserved.
Continuous Optimization on Constraint Manifolds
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1988-01-01
This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.
Optimal time-domain technique for pulse width modulation in power electronics
NASA Astrophysics Data System (ADS)
Mayergoyz, I.; Tyagi, S.
2018-05-01
Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.
Optimization of processing parameters of UAV integral structural components based on yield response
NASA Astrophysics Data System (ADS)
Chen, Yunsheng
2018-05-01
In order to improve the overall strength of an unmanned aerial vehicle (UAV), it is necessary to optimize the processing parameters of its structural components, which are affected by initial residual stress during machining. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining parameters of the UAV integral structural components. A prediction model of the workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied according to the stress state of the UAV integral component. The yield response of the time-varying stiffness is analyzed, along with the stress evolution mechanism of the UAV integral structure. The simulation results show that this method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling. The machining error is reduced, and deformation prediction and error compensation of the UAV integral structural parts are realized, thus improving the machining quality.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control energy consumption and protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of the offshore platform with the low-order main modes, based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model with standard optimization algorithms, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Blended near-optimal tools for flexible water resources decision making
NASA Astrophysics Data System (ADS)
Rosenberg, David
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the static modelled issues and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally-different alternatives that addressed select un-modelled issues. This paper presents new stratified, Monte Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region to an optimization problem. Plot controls allow users to interactively explore region features of most interest. Controls also streamline the process to elicit un-modelled issues and update the model formulation in response to elicited issues. Use for a single-objective water quality management problem at Echo Reservoir, Utah identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
Towards a metadata scheme for the description of materials - the description of microstructures
NASA Astrophysics Data System (ADS)
Schmitz, Georg J.; Böttger, Bernd; Apel, Markus; Eiken, Janin; Laschet, Gottfried; Altenfeld, Ralph; Berger, Ralf; Boussinot, Guillaume; Viardin, Alexandre
2016-01-01
The property of any material is essentially determined by its microstructure. Numerical models are increasingly the focus of modern engineering as helpful tools for tailoring and optimization of custom-designed microstructures by suitable processing and alloy design. A huge variety of software tools is available to predict various microstructural aspects for different materials. In the general frame of an integrated computational materials engineering (ICME) approach, these microstructure models provide the link between models operating at the atomistic or electronic scales, and models operating on the macroscopic scale of the component and its processing. In view of an improved interoperability of all these different tools it is highly desirable to establish a standardized nomenclature and methodology for the exchange of microstructure data. The scope of this article is to provide a comprehensive system of metadata descriptors for the description of a 3D microstructure. The presented descriptors are limited to a mere geometric description of a static microstructure and have to be complemented by further descriptors, e.g. for properties, numerical representations, kinetic data, and others in the future. Further attributes to each descriptor, e.g. on data origin, data uncertainty, and data validity range are being defined in ongoing work. The proposed descriptors are intended to be independent of any specific numerical representation. The descriptors defined in this article may serve as a first basis for standardization and will simplify the data exchange between different numerical models, as well as promote the integration of experimental data into numerical models of microstructures. An HDF5 template data file for a simple, three phase Al-Cu microstructure being based on the defined descriptors complements this article.
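As an illustration of how such descriptors might be laid out in practice, the sketch below writes a minimal microstructure description to HDF5 with h5py; the group, dataset and attribute names are hypothetical and are not the standardized descriptors proposed in the article.

```python
import numpy as np
import h5py

# Write a minimal, hypothetical microstructure description to HDF5.
# Names are illustrative only, not the article's standardized descriptors.
with h5py.File("al_cu_microstructure.h5", "w") as f:
    micro = f.create_group("microstructure")
    micro.attrs["dimensionality"] = 3
    micro.attrs["spacing_micrometre"] = 0.5

    for name, fraction in [("alpha_Al", 0.82), ("theta_Al2Cu", 0.15),
                           ("liquid", 0.03)]:
        phase = micro.create_group(f"phases/{name}")
        phase.attrs["volume_fraction"] = fraction

    # A small voxelized phase-index field standing in for the 3D geometry
    micro.create_dataset("phase_index_field",
                         data=np.zeros((16, 16, 16), dtype=np.uint8),
                         compression="gzip")
```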
Optimal control of HIV/AIDS dynamic: Education and treatment
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2014-07-01
A mathematical model which describes the transmission dynamics of HIV/AIDS is developed. Optimal control representing education and treatment for this model is explored. The existence of the optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment for the infected have a positive impact on HIV/AIDS control.
Numerical Modeling of Surface and Volumetric Cooling using Optimal T- and Y-shaped Flow Channels
NASA Astrophysics Data System (ADS)
Kosaraju, Srinivas
2017-11-01
The layout of T- and Y-shaped flow channel networks on a surface can be optimized for minimum pressure drop and pumping power. The results of the optimization are in the form of geometric parameters such as length and diameter ratios of the stem and branch sections. While these flow channels are optimized for minimum pressure drop, they can also be used for surface and volumetric cooling applications such as heat exchangers, air conditioning and electronics cooling. In this paper, an effort has been made to study the heat transfer characteristics of multiple T- and Y-shaped flow channel configurations using numerical simulations. All configurations are subjected to the same input parameters and heat generation constraints. Comparisons are made with similar results published in the literature.
An approach of traffic signal control based on NLRSQP algorithm
NASA Astrophysics Data System (ADS)
Zou, Yuan-Yang; Hu, Yu
2017-11-01
This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective function of the model is to minimize the total weighted queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are carried out to study how to set the initial solution of the algorithm so that a better local optimal solution can be obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution is, the better the local optimal solution that can be obtained.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher-order LST models with both conceptual and numerical advantages over standard straightforward design techniques.
Modified Newton-Raphson GRAPE methods for optimal control of spin systems
NASA Astrophysics Data System (ADS)
Goodwin, D. L.; Kuprov, Ilya
2016-05-01
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
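The RFO regularization can be sketched for a minimization problem as follows (GRAPE maximizes fidelity, so the analogous shift would use the other end of the augmented-Hessian spectrum); this is a generic illustration of the technique, not the implementation reported in the communication.

```python
import numpy as np

def rfo_step(gradient, hessian):
    """Rational function optimization (RFO) step for minimization:
    shift the Hessian by the lowest eigenvalue of the augmented matrix
    [[H, g], [g^T, 0]], which makes (H - lambda*I) positive
    semidefinite by eigenvalue interlacing, then solve
    (H - lambda*I) p = -g."""
    g = np.asarray(gradient, dtype=float)
    H = np.asarray(hessian, dtype=float)
    n = g.size
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = H
    aug[:n, n] = g
    aug[n, :n] = g
    lam = np.linalg.eigvalsh(aug)[0]      # lowest eigenvalue of augmented Hessian
    return np.linalg.solve(H - lam * np.eye(n), -g)

# Indefinite toy Hessian: a plain Newton step would head to a saddle,
# while the RFO-shifted step remains a descent direction.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])
print("RFO step:", rfo_step(g, H))
```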
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
Prospective treatment planning to improve locoregional hyperthermia for oesophageal cancer.
Kok, H P; van Haaren, P M A; van de Kamer, J B; Zum Vörde Sive Vörding, P J; Wiersma, J; Hulshof, M C C M; Geijsen, E D; van Lanschot, J J B; Crezee, J
2006-08-01
In the Academic Medical Center (AMC) Amsterdam, locoregional hyperthermia for oesophageal tumours is applied using the 70 MHz AMC-4 phased array system. Due to the occurrence of treatment-limiting hot spots in normal tissue and systemic stress at high power, the thermal dose achieved in the tumour can be sub-optimal. The large number of degrees of freedom of the heating device, i.e. the amplitudes and phases of the antennae, makes it difficult to avoid treatment-limiting hot spots by intuitive amplitude/phase steering. Prospective hyperthermia treatment planning combined with high resolution temperature-based optimization was applied to improve hyperthermia treatment of patients with oesophageal cancer. All hyperthermia treatments were performed with 'standard' clinical settings. Temperatures were measured systemically, at the location of the tumour and near the spinal cord, which is an organ at risk. For 16 patients numerically optimized settings were obtained from treatment planning with temperature-based optimization. Steady state tumour temperatures were maximized, subject to constraints to normal tissue temperatures. At the start of 48 hyperthermia treatments in these 16 patients temperature rise (DeltaT) measurements were performed by applying a short power pulse with the numerically optimized amplitude/phase settings, with the clinical settings and with mixed settings, i.e. numerically optimized amplitudes combined with clinical phases. The heating efficiency of the three settings was determined by the measured DeltaT values and the DeltaT-ratio between the DeltaT in the tumour (DeltaToes) and near the spinal cord (DeltaTcord). For a single patient the steady state temperature distribution was computed retrospectively for all three settings, since the temperature distributions may be quite different. To illustrate that the choice of the optimization strategy is decisive for the obtained settings, a numerical optimization on DeltaT-ratio was performed for this patient and the steady state temperature distribution for the obtained settings was computed. A higher DeltaToes was measured with the mixed settings compared to the calculated and clinical settings; DeltaTcord was higher with the mixed settings compared to the clinical settings. The DeltaT-ratio was approximately 1.5 for all three settings. These results indicate that the most effective tumour heating can be achieved with the mixed settings. DeltaT is proportional to the Specific Absorption Rate (SAR) and a higher SAR results in a higher steady state temperature, which implies that mixed settings are likely to provide the most effective heating at steady state as well. The steady state temperature distributions for the clinical and mixed settings, computed for the single patient, showed some locations where temperatures exceeded the normal tissue constraints used in the optimization. This demonstrates that the numerical optimization did not prescribe the mixed settings, because it had to comply with the constraints set to the normal tissue temperatures. However, the predicted hot spots are not necessarily clinically relevant. Numerical optimization on DeltaT-ratio for this patient yielded a very high DeltaT-ratio ( approximately 380), albeit at the cost of excessive heating of normal tissue and lower steady state tumour temperatures compared to the conventional optimization. Treatment planning can be valuable to improve hyperthermia treatments. 
A thorough discussion on clinically relevant objectives and constraints is essential.
Comparative study of beam losses and heat loads reduction methods in MITICA beam source
NASA Astrophysics Data System (ADS)
Sartori, E.; Agostinetti, P.; Dal Bello, S.; Marcuzzi, D.; Serianni, G.; Sonato, P.; Veltri, P.
2014-02-01
In negative ion electrostatic accelerators a considerable fraction of extracted ions is lost by collision processes causing efficiency loss and heat deposition over the components. Stripping is proportional to the local density of gas, which is steadily injected in the plasma source; its pumping from the extraction and acceleration stages is a key functionality for the prototype of the ITER Neutral Beam Injector, and it can be simulated with the 3D code AVOCADO. Different geometric solutions were tested aiming at the reduction of the gas density. The parameter space considered is limited by constraints given by optics, aiming, voltage holding, beam uniformity, and mechanical feasibility. The guidelines of the optimization process are presented together with the proposed solutions and the results of numerical simulations.
The modeling of MMI structures for signal processing applications
NASA Astrophysics Data System (ADS)
Le, Thanh Trung; Cahill, Laurence W.
2008-02-01
Microring resonators are promising candidates for photonic signal processing applications. However, almost all resonators that have been reported so far use directional couplers or 2×2 multimode interference (MMI) couplers as the coupling element between the ring and the bus waveguides. In this paper, instead of using 2×2 couplers, novel structures for microring resonators based on 3×3 MMI couplers are proposed. The characteristics of the device are derived using the modal propagation method. The device parameters are optimized by using numerical methods. Optical switches and filters using Silicon on Insulator (SOI) have then been designed and analyzed. This device can become a new basic component for further applications in optical signal processing. The paper concludes with some further examples of photonic signal processing circuits based on MMI couplers.
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, since high-frequency components are amplified without bound. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...
2011-03-02
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
Optimizing the well pumping rate and its distance from a stream
NASA Astrophysics Data System (ADS)
Abdel-Hafez, M. H.; Ogden, F. L.
2008-12-01
Both ground water and surface water are very important components of the water resources. Since they are coupled systems in riparian areas, management strategies that neglect interactions between them penalize senior surface water rights to the benefit of junior ground water rights holders in the prior appropriation rights system. Water rights managers face a problem in deciding which wells need to be shut down and when in the case of depleted stream flow. A simulation model representing a combined hypothetical aquifer and stream has been developed using MODFLOW 2000 to capture parameter sensitivity, test management strategies and guide field data collection campaigns to support modeling. An optimization approach has been applied to optimize both the well distance from the stream and the maximum pumping rate that does not affect the stream discharge downstream of the pumping wells. Conjunctive management can be modeled by coupling the numerical simulation model with optimization techniques using the response matrix technique. The response matrix can be obtained by calculating the response coefficient for each well and stream. The main assumption of the response matrix technique is that the amount of water drawn out of the stream into the aquifer is linearly proportional to the well pumping rate (Barlow et al. 2003). The results are presented in dimensionless form, which can be used by water managers to resolve conflicts between surface water and ground water rights holders by making the appropriate decision on which well needs to be shut down first.
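A minimal sketch of the response-matrix idea: with precomputed response coefficients (stream depletion per unit pumping for each well), total pumping is maximized subject to a cap on allowable stream depletion as a small linear programme. All numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response coefficients: fraction of each well's pumping
# rate that is ultimately drawn from the stream (larger = closer well).
r = np.array([0.8, 0.5, 0.2])
q_max = np.array([100.0, 100.0, 100.0])   # well capacities (m3/day)
allowable_depletion = 60.0                # cap on stream depletion (m3/day)

# Maximize total pumping  <=>  minimize -sum(q)
res = linprog(c=-np.ones(3),
              A_ub=r.reshape(1, -1), b_ub=[allowable_depletion],
              bounds=list(zip(np.zeros(3), q_max)))
print("optimal pumping rates:", res.x)
print("stream depletion:", r @ res.x)
```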
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of the scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single-objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and the number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean AGV operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
A three-term conjugate gradient method under the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa
2017-08-01
Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method, named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL), is proposed for unconstrained optimization; it always satisfies the sufficient descent condition. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
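A generic three-term conjugate gradient loop with a strong-Wolfe line search can be sketched as follows; the direction uses an illustrative Hestenes-Stiefel-type three-term formula that satisfies the sufficient descent identity, not necessarily the TTRMIL coefficients from the paper.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic three-term conjugate gradient loop with a strong-Wolfe
    line search.  Direction: d+ = -g+ + beta*d - theta*y with
    beta = g+'y / d'y and theta = g+'d / d'y, which guarantees the
    sufficient-descent identity d+' g+ = -||g+||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # strong Wolfe conditions
        if alpha is None:        # line search failed; take a small gradient step
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        if abs(dy) > 1e-12:
            beta = (g_new @ y) / dy
            theta = (g_new @ d) / dy
            d = -g_new + beta * d - theta * y
        else:
            d = -g_new           # restart with steepest descent
        x, g = x_new, g_new
    return x

x_star = three_term_cg(rosen, rosen_der, x0=[-1.2, 1.0])
print(x_star)   # should approach [1, 1]
```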
NASA Astrophysics Data System (ADS)
Huang, Y. G.; Wang, L. G.; Lu, Y. L.; Chen, J. R.; Zhang, J. H.
2015-09-01
Based on two-dimensional elasticity theory, this study established a mechanical model under chordally opposed distributed compressive loads, in order to perfect the theoretical foundation of the flattened Brazilian splitting test used for measuring the indirect tensile strength of rocks. The stress superposition method was used to obtain approximate analytic solutions for the stress components inside the flattened Brazilian disk. These analytic solutions were then verified through a comparison with numerical results from the finite element method (FEM). Based on the theoretical derivation, this research carried out a comparative study on the effect of the flattened loading angle on the stress values and the degree of stress concentration inside the disk. The results showed that the stress concentration near the loading point and the ratio of compressive to tensile stress inside the disk decreased dramatically as the flattened loading angle increased, avoiding crushing failure near the loading points of Brazilian disk specimens. However, the tensile stress value and the tensile region were only slightly reduced with the increase of the flattened loading angle. Furthermore, this study found that the optimal flattened loading angle was 20°-30°; flattened loading angles that were too large or too small made it difficult to guarantee the central tensile splitting failure principle of the Brazilian splitting test. According to the Griffith strength failure criterion, a formula for the indirect tensile strength of rocks was derived theoretically. The theoretical indirect tensile strength obtained closely coincided with existing experimental results. Finally, this paper simulated the fracture evolution process of rocks under different loading angles using the finite element software ANSYS. The modeling results showed that the flattened Brazilian splitting test with the optimal loading angle could guarantee tensile splitting failure initiated by a central crack.
On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions
NASA Astrophysics Data System (ADS)
Gonzalo, J.; Domínguez, D.; López, D.
2014-12-01
From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial-intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage some changes to allow this dynamic planning negotiation to soon become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a split-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions identifying the best method. Traditionally, analytic methods have been employed more for continuous problems, whereas numeric methods have been preferred for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
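The discrete (Dijkstra) side of the problem can be sketched on a small wind-affected waypoint grid, with edge weights equal to leg flight time at a fixed airspeed adjusted by the along-track wind component; the grid, wind field and speeds below are hypothetical, and crosswind effects are ignored.

```python
import numpy as np
import networkx as nx

# Hypothetical 20x20 grid of waypoints with a spatially varying eastward
# wind field; edge weight = flight time with groundspeed = airspeed plus
# the wind component along the leg.
N, spacing, airspeed = 20, 10_000.0, 230.0          # m, m/s
xs = np.arange(N) * spacing
wind_u = 25.0 * np.sin(xs / (N * spacing) * np.pi)  # eastward wind, varies with x

G = nx.DiGraph()
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # east, west, north, south
for i in range(N):
    for j in range(N):
        for di, dj in moves:
            ii, jj = i + di, j + dj
            if 0 <= ii < N and 0 <= jj < N:
                tail = wind_u[i] * di                # wind helps eastward legs only
                G.add_edge((i, j), (ii, jj),
                           weight=spacing / max(airspeed + tail, 1.0))

path = nx.dijkstra_path(G, (0, 0), (N - 1, N - 1), weight="weight")
time_s = nx.dijkstra_path_length(G, (0, 0), (N - 1, N - 1), weight="weight")
print(f"waypoints: {len(path)}, flight time: {time_s / 60:.1f} min")
```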
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.
2007-01-01
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…
Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
The number processing and calculation system: evidence from cognitive neuropsychology.
Salguero-Alcañiz, M P; Alameda-Bailén, J R
2015-04-01
Cognitive neuropsychology focuses on the concepts of dissociation and double dissociation. The performance of number processing and calculation tasks by patients with acquired brain injury can be used to characterise the way in which the healthy cognitive system manipulates number symbols and quantities. The objective of this study is to determine the components of the numerical processing and calculation system. Participants consisted of 6 patients with acquired brain injuries in different cerebral localisations. We used Batería de evaluación del procesamiento numérico y el cálculo, a battery assessing number processing and calculation. Data was analysed using the difference in proportions test. Quantitative numerical knowledge is independent from number transcoding, qualitative numerical knowledge, and calculation. Recodification is independent from qualitative numerical knowledge and calculation. Quantitative numerical knowledge and calculation are also independent functions. The number processing and calculation system comprises at least 4 components that operate independently: quantitative numerical knowledge, number transcoding, qualitative numerical knowledge, and calculation. Therefore, each one may be damaged selectively without affecting the functioning of another. According to the main models of number processing and calculation, each component has different characteristics and cerebral localisations. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
MolProbity: all-atom contacts and structure validation for proteins and nucleic acids
Davis, Ian W.; Leaver-Fay, Andrew; Chen, Vincent B.; Block, Jeremy N.; Kapral, Gary J.; Wang, Xueyi; Murray, Laura W.; Arendall, W. Bryan; Snoeyink, Jack; Richardson, Jane S.; Richardson, David C.
2007-01-01
MolProbity is a general-purpose web server offering quality validation for 3D structures of proteins, nucleic acids and complexes. It provides detailed all-atom contact analysis of any steric problems within the molecules as well as updated dihedral-angle diagnostics, and it can calculate and display the H-bond and van der Waals contacts in the interfaces between components. An integral step in the process is the addition and full optimization of all hydrogen atoms, both polar and nonpolar. New analysis functions have been added for RNA, for interfaces, and for NMR ensembles. Additionally, both the web site and major component programs have been rewritten to improve speed, convenience, clarity and integration with other resources. MolProbity results are reported in multiple forms: as overall numeric scores, as lists or charts of local problems, as downloadable PDB and graphics files, and most notably as informative, manipulable 3D kinemage graphics shown online in the KiNG viewer. This service is available free to all users at http://molprobity.biochem.duke.edu. PMID:17452350
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
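The interpolation of the conductivity kernel from a coarse set of test points onto the meshless computational points can be sketched with standard tools. The snippet below is a generic stand-in, not FAR-TECH's code: it uses SciPy's linear scattered-data interpolation with a hypothetical kernel function in place of the hot-plasma response.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical "conductivity kernel" evaluated on a coarse set of test points.
def kernel(r, z):
    return np.exp(-((r - 1.5) ** 2 + z ** 2) / 0.1)   # stand-in for the hot response

coarse = rng.uniform([1.0, -0.5], [2.0, 0.5], size=(200, 2))      # (r, z) test points
values = kernel(coarse[:, 0], coarse[:, 1])

# Meshless computational points (e.g. produced by a point-elimination step).
comp = rng.uniform([1.0, -0.5], [2.0, 0.5], size=(5000, 2))

# Linear interpolation onto the computational points, with nearest-neighbour
# fallback for points outside the convex hull of the coarse set.
interp = griddata(coarse, values, comp, method="linear")
fallback = griddata(coarse, values, comp, method="nearest")
interp = np.where(np.isnan(interp), fallback, interp)

print("max interpolation error on this synthetic kernel:",
      np.max(np.abs(interp - kernel(comp[:, 0], comp[:, 1]))))
```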
Hrabovský, Miroslav
2014-01-01
The purpose of the study is to propose an extension of a one-dimensional speckle correlation method, primarily intended for determining one-dimensional object translation, to the detection of general in-plane object translation. To that end, a numerical simulation of the displacement of the speckle field caused by a general in-plane translation of the object is presented. The translation components a_x and a_y, representing the projections of the object's displacement vector a onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, the method can be further optimized by reducing the intensity values representing the detected speckle patterns. The theoretical relations between the translation components a_x and a_y of the object and the displacement of the speckle pattern for the selected geometrical arrangement are given and used to verify the validity of the proposed method. PMID:24592180
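A much-simplified stand-in for the extended one-dimensional correlation idea is to project the speckle pattern onto the x- and y-axes and estimate each translation component separately from a 1D cross-correlation. The sketch below uses a synthetic pattern and hypothetical shifts and is not the author's algorithm.

```python
import numpy as np
from scipy.signal import correlate
from scipy.ndimage import shift as nd_shift, gaussian_filter

rng = np.random.default_rng(2)

# Synthetic speckle-like pattern and a copy translated by a hypothetical (a_x, a_y).
pattern = gaussian_filter(rng.standard_normal((512, 512)), sigma=2.0)
a_x, a_y = 7.0, -4.0
moved = nd_shift(pattern, shift=(a_y, a_x), order=1, mode="wrap")  # rows = y, cols = x

def shift_1d(p, q):
    """Integer shift of q relative to p from the peak of their 1D cross-correlation."""
    p = p - p.mean()
    q = q - q.mean()
    c = correlate(q, p, mode="full")
    return np.argmax(c) - (len(p) - 1)

# Reduce the 2D problem to two 1D ones by projecting onto each axis.
est_ax = shift_1d(pattern.sum(axis=0), moved.sum(axis=0))   # projection onto x
est_ay = shift_1d(pattern.sum(axis=1), moved.sum(axis=1))   # projection onto y
print("estimated (a_x, a_y):", est_ax, est_ay)              # expect about (7, -4)
```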
Hybrid power supplies: A capacitor-assisted battery
NASA Astrophysics Data System (ADS)
Catherino, Henry A.; Burgel, Joseph F.; Shi, Peter L.; Rusek, Andrew; Zou, Xiulin
A hybrid electrochemical power supply is a concept that circumvents the need to design any single power source to meet some extraordinary application requirement. A hybrid allows the use of components designed for near-optimal operation without unnecessary performance sacrifices. In many cases additional synergistic effects appear. In this study, an electrochemical capacitor was employed as a power assist for a battery. An engine starting load was numerically modeled in the time domain and simulations were carried out. Actual measurements were then taken on the cranking of a diesel engine removed from a 5.0-tonne military truck and cranked in an environmental chamber. The cranking currents delivered by each power source were measured in the accessible current loops. This permitted the model parameters to be identified and, by doing so, studies using the analytical model demonstrated the merit of this hybrid application. The general system response of the battery/capacitor configuration was then modeled as a function of temperature. Doing this revealed the electrical interaction between the hybrid components. This study illustrates another case for advocating hybridized power systems.
Simulation of a class of hazardous situations in the ICS «INM RAS - Baltic Sea»
NASA Astrophysics Data System (ADS)
Zakharova, Natalia; Agoshkov, Valery; Aseev, Nikita; Parmuzin, Eugene; Sheloput, Tateana; Shutyaev, Victor
2017-04-01
The development of Informational Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve such problems one needs to apply modern results from different disciplines and recent developments in mathematical modeling, the theory of adjoint equations and optimal control, inverse problems, numerical methods, numerical algebra, scientific computing and the processing of satellite data. In this work the results of the ICS development for the PC-ICS "INM RAS - Baltic Sea" are presented, and the practical problems studied with the ICS are discussed. The System includes a numerical model of the Baltic Sea thermodynamics, a new oil spill model describing the propagation of a slick on the sea surface (Agoshkov, Aseev et al., 2014) and a block for calculating optimal ship routes (Agoshkov, Zayachkovsky et al., 2014). The ICS is based on the INMOM numerical model of the Baltic Sea thermodynamics (Zalesny et al., 2013). The main hydrodynamic parameters (temperature, salinity, velocities, sea level) can be calculated through the user-friendly interface of the ICS. The System includes data assimilation procedures (Agoshkov, 2003; Parmuzin, Agoshkov, 2012), and the block for variational assimilation of sea surface temperature can be used to obtain the main hydrodynamic parameters. The main capabilities of the ICS and several numerical experiments are presented in this work. The risk control problem is understood here as determining the optimal quantity of resources necessary to decrease the risk to some acceptable value. The mass of the oil slick is chosen as the control function. For the realization of the random variable a quadratic "cost functional" is introduced, comprising the cleaning costs and the deviation of the oil pollution damage from its acceptable value. The minimization of this functional is carried out using methods of optimal control and the theory of adjoint equations, and the solution of this problem is found explicitly. The study was supported by the Russian Foundation for Basic Research (project 16-31-00510) and by the Russian Science Foundation (project No. 14-11-00609). V. I. Agoshkov, Methods of Optimal Control and Adjoint Equations in Problems of Mathematical Physics. INM RAS, Moscow, 2003 (in Russian). V. B. Zalesny, A. V. Gusev, V. O. Ivchenko, R. Tamsalu, and R. Aps, Numerical model of the Baltic Sea circulation. Russ. J. Numer. Anal. Math. Modelling 28 (2013), No. 1, 85-100. V. I. Agoshkov, A. O. Zayachkovskiy, R. Aps, P. Kujala, and J. Rytkönen, Risk theory based solution to the problem of optimal vessel route. Russ. J. Numer. Anal. Math. Modelling 29 (2014), No. 2, 69-78. V. Agoshkov, N. Aseev, R. Aps, P. Kujala, J. Rytkönen, and V. Zalesny, The problem of control of oil pollution risk in the Baltic Sea. Russ. J. Numer. Anal. Math. Modelling 29 (2014), No. 2, 93-105. E. I. Parmuzin and V. I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling 27 (2012), No. 1, 69-94. Olof Liungman and Johan Mattsson, Scientific Documentation of Seatrack Web: physical processes, algorithms and references, 2011.
Holistic irrigation water management approach based on stochastic soil water dynamics
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Recognizing the gap between fundamental unsaturated-zone transport processes and practical soil and water management, which stems partly from the limited effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear, non-convex program with an economic objective function that addresses water productivity and profitability in irrigation management by optimizing the irrigation policy. Using an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the unsaturated and saturated soil zones is treated semi-analytically in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components assuming a fixed shallow water table level, while in the second step a numerical Newton-Raphson procedure is employed to update the water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program (see the sketch below). Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte-Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at the growing-season time scale, the developed model can be used in large-scale systems such as irrigation districts and agricultural catchments. Accordingly, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwest Iran. The area suffers from water scarcity, and the trade-off between the level of deficit and economic profit therefore needs to be assessed. Based on the results, while the maximum net benefit was obtained for the stress-avoidance (SA) irrigation policy, the highest water profitability, defined as the economic net benefit gained per unit volume of irrigation water applied, was obtained when only about 60% of the water used in the SA policy was applied.
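As a minimal illustration of the optimization-simulation coupling, the sketch below runs a bare-bones Particle Swarm Optimization over two hypothetical decision variables (an irrigation depth and an application threshold) against a stand-in objective; in the paper this objective would be the expected net benefit returned by the eco-hydrological simulation model.

```python
import numpy as np

rng = np.random.default_rng(3)

def neg_net_benefit(x):
    """Stand-in for the (negative) expected net benefit returned by the
    eco-hydrological simulation; the real model would be called here."""
    depth, threshold = x
    return (depth - 55.0) ** 2 / 50.0 + (threshold - 0.6) ** 2 * 40.0 + \
           5.0 * np.sin(0.3 * depth) ** 2

lower = np.array([10.0, 0.2])     # hypothetical bounds on the decision variables
upper = np.array([120.0, 0.9])

n_particles, n_iter = 30, 200
w, c1, c2 = 0.72, 1.5, 1.5        # standard PSO coefficients

x = rng.uniform(lower, upper, size=(n_particles, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([neg_net_benefit(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lower, upper)
    f = np.array([neg_net_benefit(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best irrigation policy found:", gbest, "objective:", pbest_f.min())
```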
NASA Astrophysics Data System (ADS)
Castagnède, Bernard; Jenkins, James T.; Sachse, Wolfgang; Baste, Stéphane
1990-03-01
A method is described to optimally determine the elastic constants of anisotropic solids from wave-speed measurements in arbitrary nonprincipal planes. For such a problem, the characteristic equation is a degree-three polynomial which generally does not factorize. By developing and rearranging this polynomial, a nonlinear system of equations is obtained. The elastic constants are then recovered by minimizing a functional derived from this overdetermined system of equations. Calculations of the functional are given for two specific cases, i.e., the orthorhombic and the hexagonal symmetries. Some numerical results showing the efficiency of the algorithm are presented. A numerical method is also described for the recovery of the orientation of the principal acoustical axes. This problem is solved through a double-iterative numerical scheme. Numerical as well as experimental results are presented for a unidirectional composite material.
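A generic version of this recovery procedure can be sketched as a nonlinear least-squares fit of the Christoffel (characteristic) equation to measured wave speeds. The code below is an illustration under simplifying assumptions, not the authors' algorithm: it assumes orthorhombic symmetry, a known density, known propagation directions, and synthetic "measurements" generated from hypothetical constants.

```python
import numpy as np
from scipy.optimize import least_squares

RHO = 1600.0  # kg/m^3, hypothetical density
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def stiffness(p):
    """Orthorhombic stiffness matrix (GPa) from 9 constants
    [C11, C22, C33, C44, C55, C66, C12, C13, C23]."""
    C11, C22, C33, C44, C55, C66, C12, C13, C23 = p
    Cv = np.diag([C11, C22, C33, C44, C55, C66]).astype(float)
    Cv[0, 1] = Cv[1, 0] = C12
    Cv[0, 2] = Cv[2, 0] = C13
    Cv[1, 2] = Cv[2, 1] = C23
    return Cv

def speeds(p, n):
    """Three phase velocities along unit direction n from the Christoffel equation."""
    Cv = stiffness(p) * 1e9                       # Pa
    C = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i, j, k, l] = Cv[VOIGT[i, j], VOIGT[k, l]]
    gamma = np.einsum('ijkl,j,l->ik', C, n, n)
    eig = np.clip(np.linalg.eigvalsh(gamma), 0.0, None)   # rho * v^2, clipped for safety
    return np.sort(np.sqrt(eig / RHO))

# Synthetic "measurements" in nonprincipal directions from hypothetical true constants.
p_true = np.array([120.0, 80.0, 70.0, 20.0, 25.0, 30.0, 40.0, 35.0, 30.0])
rng = np.random.default_rng(4)
dirs = rng.standard_normal((25, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
v_meas = np.array([speeds(p_true, n) for n in dirs]) * (1 + 0.005 * rng.standard_normal((25, 3)))

def residuals(p):
    # Overdetermined system: predicted minus measured speeds for every direction.
    return np.concatenate([speeds(p, n) - v for n, v in zip(dirs, v_meas)])

fit = least_squares(residuals, x0=p_true * (1 + 0.2 * rng.standard_normal(9)))
print("recovered constants (GPa):", np.round(fit.x, 1))
```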
Large Black Holes in the Randall-Sundrum II Model
NASA Astrophysics Data System (ADS)
Yaghoobpour Tari, Shima
The Einstein equation with a negative cosmological constant Λ in five dimensions for the Randall-Sundrum II model, which includes a black hole, has been solved numerically. We have constructed an AdS5-CFT4 solution numerically, using a spectral method to minimize the integral of the square of the error of the Einstein equation, with 210 parameters to be determined by optimization. This metric is conformal to the Schwarzschild metric at an AdS5 boundary with an infinite scale factor, so we consider this solution as an infinite-mass black hole solution. We have rewritten the infinite-mass black hole in the Fefferman-Graham form and obtained the numerical components of the CFT energy-momentum tensor. Using them, we have perturbed the metric to relocate the brane from infinity and obtained a large static black hole solution for the Randall-Sundrum II model. The changes of mass, entropy, temperature and area of the large black hole relative to the Schwarzschild metric are studied to first order in the perturbation parameter 1/(-Λ5 M^2). The Hawking temperature and entropy of our large black hole have the same values as for the Schwarzschild metric with the same mass, but the horizon area is increased by about 4.7/(-Λ5). Figueras, Lucietti, and Wiseman found an AdS5-CFT4 solution using an independent method, different from ours, called the Ricci-DeTurck-flow method. Figueras and Wiseman then perturbed this solution in the same way as we have done and obtained the solution for the large black hole in the Randall-Sundrum II model. These two numerical solutions are the first mathematical demonstrations of the existence of a large black hole in the Randall-Sundrum II model. We have compared their results with ours for the CFT energy-momentum tensor components and the perturbed metric. We have shown that the results are in close agreement, which can be considered evidence that the solution for the large black hole in the Randall-Sundrum II model exists.
Numerical analysis of tailored sheets to improve the quality of components made by SPIF
NASA Astrophysics Data System (ADS)
Gagliardi, Francesco; Ambrogio, Giuseppina; Cozza, Anna; Pulice, Diego; Filice, Luigino
2018-05-01
In this paper, the authors present a study on the profitable combination of forming techniques. In more detail, the attention has been put on combining single point incremental forming (SPIF) with, generally speaking, an additional process that can locally thicken the initial blank to compensate for the thinning the sheet undergoes. Focusing on the excessive thinning of parts made by SPIF, a hybrid approach can be regarded as a viable solution to reduce the inhomogeneous thickness distribution of the sheet. The basic idea is to work on a blank previously modified by a deformation step performed, for instance, by forming, additive or subtractive processes. To evaluate the effectiveness of this hybrid solution, an FE numerical model has been defined to analyze the thickness variation of tailored sheets that are incrementally formed, optimizing the material distribution according to the shape to be manufactured. Simulations based on the explicit formulation have been set up for the model implementation. The mechanical properties of the sheet material have been taken from the literature, and a frustum of a cone has been considered as the benchmark profile for the analysis. The outcomes of the numerical model have been evaluated in terms of both maximum thinning and final thickness distribution. The feasibility of the proposed approach is detailed in the paper.
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can serve as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimating the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated into thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytical model is based on the equivalent load impedance approach for the piezoelectric capacitance and the AC-DC circuit elements. The analytical results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
Experimental and numerical investigation of centrifugal pumps with asymmetric inflow conditions
NASA Astrophysics Data System (ADS)
Mittag, Sten; Gabi, Martin
2015-11-01
Most of the time, pumps operate away from their best efficiency point. Reasons include changes of operating conditions, modifications, pollution, wear and erosion. The consequences are non-rotationally-symmetric flows, transient operating conditions, increased risk of cavitation, decreased efficiency and unpredictable wear. Construction components of centrifugal pumps, in particular intake elbows, contribute to this. Intake elbows cause additional losses and secondary flows, hence non-rotationally-symmetric velocity distributions at the inlet of the centrifugal pump. As a result, the impeller vanes experience continuous changes of the intake flow angle and, with it, transient flow conditions in the blade channels. This paper presents the first results of a project experimentally and numerically investigating the consequences of non-rotational inflow on the leading-edge flow conditions of a centrifugal pump. Two pump-intake-elbow systems are compared, altering only the intake elbow geometry: a common single-bend 90° elbow and a numerically optimized elbow (improved with respect to rotationally symmetric inflow conditions and friction coefficient). The experiments are carried out using time-resolved stereoscopic PIV on a full acrylic pump with a refractive-index-matched (RIM) working fluid. This allows transient investigation of the flow field simultaneously at all blade leading edges. Additional CFD results are validated and used to further support the investigation, i.e. for comparison with an analogous pump system with ideal inflow conditions.
Dynamics of anchor last deployment of submersible buoy system
NASA Astrophysics Data System (ADS)
Zheng, Zhongqiang; Xu, Jianpeng; Huang, Peng; Wang, Lei; Yang, Xiaoguang; Chang, Zongyu
2016-02-01
Submersible buoy systems are widely used for oceanographic research, ocean engineering and coastal defense. A severe sea environment has obvious effects on the dynamics of submersible buoy systems. Huge tensions can occur and may cause cables to snap, especially during deployment. This paper studies the anchor-last deployment dynamics of submersible buoy systems with numerical and experimental methods. By applying the lumped mass approach, a three-dimensional multi-body model of the submersible buoy system is developed, considering the hydrodynamic force, the tension force and the impact force between components of the submersible buoy system and the seabed. A numerical integration method is used to solve the differential equations. The simulation output includes the tension force, trajectory, profile, dropping location and impact force of the submersible buoys. In addition, a deployment experiment with a simplified submersible buoy model was carried out, from which the profile and the velocities of different nodes of the submersible buoy are obtained. By comparing the results of the two methods, it is found that the numerical model simulates the actual process and conditions of the experiment well. The simulation results agree well with the experimental results, such as the gravity anchor's location and the velocities of different nodes of the submersible buoy. The study results help in understanding the conditions of submersible buoy deployment, operation and recovery, and can be used to guide the design and optimization of such systems.
Integrating prediction, provenance, and optimization into high energy workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schram, M.; Bansal, V.; Friese, R. D.
We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
Design and development of conformal antenna composite structure
NASA Astrophysics Data System (ADS)
Xie, Zonghong; Zhao, Wei; Zhang, Peng; Li, Xiang
2017-09-01
In the manufacturing process of a common smart skin antenna, the adhesive covering the radiating elements of the antenna leads to a severe deviation of the resonant frequency, which degrades the electromagnetic performance of the antenna. In this paper, a new component called a package cover was adopted to prevent the adhesive from covering the radiating elements of the microstrip antenna array. The package cover and the microstrip antenna array were bonded together as a packaged antenna, which was then embedded into a composite sandwich structure to develop a new structure called the conformal antenna composite structure (CACS). The geometric parameters of the microstrip antenna array and the CACS were optimized with the commercial software CST Microwave Studio. According to the optimal results, the microstrip antenna array and the CACS were manufactured and tested. The experimental and numerical results for the electromagnetic performance showed that the resonant frequency of the CACS was close to that of the microstrip antenna array (with an error of less than 1%) and that the CACS had a higher gain (about 2 dB) than the microstrip antenna array. The package system increased the electromagnetic radiated energy at the design frequency by nearly 66%. The numerical model generated by CST Microwave Studio in this study could successfully predict the electromagnetic performance of the microstrip antenna array and the CACS with relatively good accuracy. The mechanical analysis results showed that the CACS had better flexural properties than the composite sandwich structure without the embedded packaged antenna. The comparison of the electromagnetic performance of the CACS and the MECSSA showed that the package system was useful and effective.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for a ground-truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
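The Eulerian material acceleration that feeds these pressure estimators is a simple construct in itself. As a minimal sketch (with a synthetic, analytically prescribed velocity field standing in for PIV data), the code below evaluates Du/Dt with second-order central differences in time and space, the same discretization the error model in the paper considers.

```python
import numpy as np

# Synthetic planar velocity fields u(x, y, t), v(x, y, t) on a uniform grid,
# standing in for time-resolved two-component PIV data.
nx, ny, nt = 128, 96, 64
dx = dy = 1e-3            # m, hypothetical vector spacing
dt = 5e-4                 # s, hypothetical sampling interval
x = np.arange(nx) * dx
y = np.arange(ny) * dy
t = np.arange(nt) * dt
X, Y, T = np.meshgrid(x, y, t, indexing="ij")

u = 1.0 + 0.2 * np.sin(2 * np.pi * (X / 0.05 - T / 0.02))
v = 0.1 * np.cos(2 * np.pi * (Y / 0.05 - T / 0.02))

# Eulerian material acceleration, Du/Dt = du/dt + u du/dx + v du/dy,
# using second-order central differences via np.gradient.
du_dx, du_dy, du_dt = np.gradient(u, dx, dy, dt)
dv_dx, dv_dy, dv_dt = np.gradient(v, dx, dy, dt)

ax = du_dt + u * du_dx + v * du_dy
ay = dv_dt + u * dv_dx + v * dv_dy

print("peak |Du/Dt|:", np.max(np.hypot(ax, ay)), "m/s^2")
```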
NASA Astrophysics Data System (ADS)
Kowalski, Dariusz; Grzyl, Beata; Kristowski, Adam
2017-09-01
Steel materials, owing to their numerous advantages (high availability, ease of processing and the possibility of almost any shaping), are commonly applied in construction for basic load-bearing systems and auxiliary structures. The major disadvantage of this material, however, is its high susceptibility to corrosion, which depends strictly on the local conditions at the facility and the type of corrosion protection system applied. The paper presents an analysis of the life cycle costs of barrier structures installed on bridges and operated under road-lane conditions. Three anti-corrosion protection systems were considered, and their essential cost components were analysed. The possibility of significantly reducing the costs associated with anti-corrosion protection at the maintenance stage of steel barriers over a period of 30 years is indicated. The possibility of using a new approach based on life cycle cost estimation in the anti-corrosion protection of steel elements is presented, and the relationship between the method of steel barrier protection, the scope of repair and renewal work, and the costs is shown. The article proposes an optimal solution which, while reducing the cost of maintaining road infrastructure components with respect to corrosion protection, allows certain safety standards to be maintained for steel barriers installed on bridges.
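The life-cycle cost comparison can be sketched as a discounted sum of initial, maintenance, and renewal expenditures over the 30-year horizon. The figures below are purely hypothetical placeholders, not the values analysed in the paper.

```python
# Minimal discounted life-cycle cost (LCC) comparison for three hypothetical
# anti-corrosion protection systems of a steel bridge barrier.
HORIZON = 30          # years
RATE = 0.03           # assumed discount rate

# (initial cost, yearly inspection/maintenance cost, renewal cost, renewal interval)
systems = {
    "paint system":          (100.0, 2.0, 60.0, 10),
    "hot-dip galvanizing":   (160.0, 1.0, 0.0, HORIZON + 1),   # no renewal within horizon
    "duplex (zinc + paint)": (200.0, 0.5, 40.0, 20),
}

def lcc(initial, upkeep, renewal, interval, horizon=HORIZON, rate=RATE):
    cost = initial
    for year in range(1, horizon + 1):
        discount = (1 + rate) ** -year
        cost += upkeep * discount
        if year % interval == 0:
            cost += renewal * discount
    return cost

for name, params in systems.items():
    print(f"{name:23s} LCC over {HORIZON} years: {lcc(*params):7.1f} (cost units)")
```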
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao
2017-12-01
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
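The key structural observation, that MK is self-adjoint in the K-inner product, can be checked and exploited directly in a small dense setting. The sketch below is a toy verification, not the authors' Davidson or LOBPCG solvers: it assumes, as a stand-in, that K is symmetric positive definite and M symmetric, factors K = L L^T, and recovers the eigenpairs of MK from the ordinary symmetric problem L^T M L.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Symmetric M and symmetric positive definite K (stand-ins for the response blocks).
A = rng.standard_normal((n, n))
M = 0.5 * (A + A.T)
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)

# Self-adjointness of MK in the K-inner product <a, b>_K = a^T K b.
u, v = rng.standard_normal((2, n))
lhs = (M @ K @ u) @ K @ v          # <MK u, v>_K
rhs = u @ K @ (M @ K @ v)          # <u, MK v>_K
print("self-adjointness check:", np.isclose(lhs, rhs))

# Exploit the structure: with K = L L^T, the eigenvalues of MK equal those of the
# symmetric matrix L^T M L, so a symmetric solver applies and the eigenvector of
# MK is recovered cheaply as x = L^{-T} y.
L = np.linalg.cholesky(K)
evals, Y = np.linalg.eigh(L.T @ M @ L)
X = np.linalg.solve(L.T, Y)

# Compare against a general nonsymmetric solve of M @ K.
ref = np.sort(np.linalg.eigvals(M @ K).real)
print("max eigenvalue difference:", np.max(np.abs(np.sort(evals) - ref)))
print("residual |MK x - lambda x| for the largest eigenpair:",
      np.linalg.norm(M @ K @ X[:, -1] - evals[-1] * X[:, -1]))
```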
Investigations of quantum heuristics for optimization
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
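As a concrete miniature of the quantum approximate optimization algorithm, the sketch below classically simulates depth-one QAOA for MaxCut on a three-vertex triangle and scans the two angles (gamma, beta) on a grid. It is an illustrative toy, not the group's implementation or a hardware run.

```python
import numpy as np
from itertools import product

# MaxCut instance: the triangle graph on 3 vertices.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

# Cost C(z) = number of cut edges, evaluated on all 2^n bitstrings.
bits = np.array([[(z >> q) & 1 for q in range(n)] for z in range(2 ** n)])
cost = np.array([sum(bits[z, i] != bits[z, j] for i, j in edges) for z in range(2 ** n)])

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def mixer(beta):
    """exp(-i beta sum_j X_j) as a tensor product of single-qubit rotations."""
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        U = np.kron(U, rx)
    return U

plus = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)   # uniform superposition

def expectation(gamma, beta):
    state = np.exp(-1j * gamma * cost) * plus      # phase-separation unitary (diagonal)
    state = mixer(beta) @ state                    # mixing unitary
    return float(np.real(np.abs(state) ** 2 @ cost))

gammas = np.linspace(0, 2 * np.pi, 60)
betas = np.linspace(0, np.pi, 60)
best = max(((expectation(g, b), g, b) for g, b in product(gammas, betas)))
print("best <C> = %.3f at gamma=%.2f, beta=%.2f (optimal cut value is 2)" % best)
```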
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using a least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
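The two-step logic, least-squares surface fitting to obtain starting design variables followed by numerical fine-tuning, can be illustrated on a generic surface. The snippet below fits polynomial coefficients to a hypothetical target deflection shape and then refines them with a derivative-free optimizer; it is only a schematic of the workflow, not the in-house tool.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical target jig-shape deflection sampled on a wing planform grid.
x, y = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 15))
target = 0.05 * x ** 2 + 0.02 * x * y - 0.01 * y ** 2 + 0.003 * np.sin(6 * x)

# Design variables: coefficients of a quadratic surface
# z = a0 + a1 x + a2 y + a3 x^2 + a4 x y + a5 y^2.
basis = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                         x.ravel() ** 2, (x * y).ravel(), y.ravel() ** 2])

# Step 1: least-squares surface fit provides the starting design variables.
start, *_ = np.linalg.lstsq(basis, target.ravel(), rcond=None)

def objective(coeffs):
    # Illustrative fine-tuning criterion: worst-case deviation from the target
    # shape (a structural/aero analysis would normally sit inside this function).
    return np.max(np.abs(basis @ coeffs - target.ravel()))

# Step 2: numerical optimization further tunes the shape.
result = minimize(objective, start, method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-12})
print("max deviation after step 1: %.2e, after step 2: %.2e"
      % (objective(start), result.fun))
```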
NASA Astrophysics Data System (ADS)
Rosenberg, David E.
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the modeled issues, and managers often seek near-optimal alternatives that address unmodeled objectives, preferences, limits, uncertainties, and other issues. Early on, Modeling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed some unmodeled issues. This paper presents new stratified Monte-Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and extent of the near-optimal region of an optimization problem. Interactive plot controls allow users to explore the region features of most interest. Controls also streamline the process of eliciting unmodeled issues and updating the model formulation in response to elicited issues. Application to an example single-objective, linear water quality management problem at Echo Reservoir, Utah, identifies numerous flexible practices to reduce the phosphorus load to the reservoir while maintaining close-to-optimal performance. Flexibility is upheld by further interactive alternative generation, by transforming the formulation into a multiobjective problem, and by relaxing the tolerance parameter to expand the near-optimal region. Compared to MGA, the new blended tools generate more numerous alternatives faster, show the near-optimal region more fully, and help elicit a larger set of unmodeled issues.
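The core near-optimal idea can be reproduced on a toy linear program: fix a tolerable deviation from the optimal objective value, add it as a constraint, and then generate structurally different alternatives inside that region. The sketch below does this with random secondary objectives on a hypothetical two-practice phosphorus-reduction LP; it is a schematic of the concept, not the paper's stratified MCMC sampler or plotting tools.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

# Hypothetical LP: minimize cost of two phosphorus-reduction practices x1, x2
# subject to meeting a load-reduction target and capacity limits.
c = np.array([3.0, 5.0])                      # unit costs
A_ub = np.array([[-2.0, -4.0],                # -(2 x1 + 4 x2) <= -20  (load target)
                 [1.0, 0.0],                  # x1 <= 8
                 [0.0, 1.0]])                 # x2 <= 6
b_ub = np.array([-20.0, 8.0, 6.0])

opt = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
f_star = opt.fun

# Near-optimal region: cost within 10% of optimal, added as one more constraint.
tolerance = 0.10
A_no = np.vstack([A_ub, c])
b_no = np.append(b_ub, (1 + tolerance) * f_star)

# Generate different alternatives by optimizing random directions inside the
# near-optimal region (MGA-like, but simplified).
alternatives = []
for _ in range(50):
    d = rng.standard_normal(2)
    alt = linprog(-d, A_ub=A_no, b_ub=b_no, bounds=[(0, None), (0, None)])
    alternatives.append(alt.x)
alternatives = np.array(alternatives)

print("optimal plan:", np.round(opt.x, 2), "cost:", round(f_star, 2))
print("spread of near-optimal plans (min/max per practice):")
print(np.round(alternatives.min(axis=0), 2), np.round(alternatives.max(axis=0), 2))
```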
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usually nonlinear and large-scale nature of the mathematical models of this class of systems and the presence of constraints in the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles; the results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
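The control vector parameterization idea, approximating the time-varying stimulus by a small number of piecewise-constant levels and optimizing those levels, can be sketched on a deliberately simple surrogate system. The snippet below drives a scalar ODE toward a desired trajectory; it is only a schematic of CVP, not the hybrid global method or the PDE models used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_FINAL, N_INTERVALS = 10.0, 8           # horizon and number of control intervals
edges = np.linspace(0.0, T_FINAL, N_INTERVALS + 1)
t_eval = np.linspace(0.0, T_FINAL, 200)
target = 1.0 - np.exp(-0.5 * t_eval)     # hypothetical desired response profile

def piecewise_u(t, levels):
    """Piecewise-constant control: one level per interval (the CVP parameterization)."""
    idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, N_INTERVALS - 1)
    return levels[idx]

def simulate(levels):
    # Surrogate dynamics dx/dt = -x + u(t), standing in for the distributed model.
    rhs = lambda t, x: -x + piecewise_u(t, levels)
    sol = solve_ivp(rhs, (0.0, T_FINAL), [0.0], t_eval=t_eval, max_step=0.05)
    return sol.y[0]

def objective(levels):
    # Mean squared tracking error between simulated response and desired profile.
    err = simulate(levels) - target
    return float(np.mean(err ** 2))

bounds = [(0.0, 2.0)] * N_INTERVALS      # bounds on the stimulus levels
res = minimize(objective, x0=np.full(N_INTERVALS, 0.5), bounds=bounds, method="Powell")
print("optimal control levels:", np.round(res.x, 3), " tracking error:", res.fun)
```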
Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan
2015-01-01
Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Ensuring complete coverage in the case of deep-seated tumors is not trivial and can best be ensured by patient-specific treatment planning. The treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and time consumption from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating the expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for patients. The additional value of the tool is the possibility of easy upgrades and the integration of modules with new functionalities as they are developed. PMID:26356007
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
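A minimum action computation can be demonstrated on the simplest bistable gradient system. The sketch below discretizes a path between the two attractors of a double-well potential and minimizes the Freidlin-Wentzell action with a generic optimizer; it conveys only the numerical idea and is far from the quasi-geostrophic setting of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Bistable toy dynamics dx/dt = -V'(x) + noise, with attractors at x = -1 and x = +1.
def drift(x):
    return -(x ** 3 - x)          # -V'(x) for V(x) = x^4/4 - x^2/2

N, T = 100, 20.0                  # path discretization and transition time window
dt = T / N

def action(x_int):
    """Discretized Freidlin-Wentzell action 0.5 * int |x_dot - b(x)|^2 dt
    over a path pinned at the two attractors."""
    x = np.concatenate(([-1.0], x_int, [1.0]))
    x_dot = np.diff(x) / dt
    x_mid = 0.5 * (x[:-1] + x[1:])
    return 0.5 * np.sum((x_dot - drift(x_mid)) ** 2) * dt

x0 = np.linspace(-1.0, 1.0, N + 1)[1:-1]          # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B", options={"maxfun": 200000})
print("minimum action estimate:", round(res.fun, 4))
# For this gradient system, theory gives S = 2 * [V(saddle) - V(minimum)]
#                                          = 2 * (0 - (-1/4)) = 0.5.
```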
Cheng, Yung-Chang; Lin, Deng-Huei; Jiang, Cho-Pei; Lin, Yuan-Min
2017-05-01
This study proposes a new methodology for dental implant customization consisting of numerical geometric optimization and 3-dimensional printing of zirconia ceramic. In the numerical modeling, the exogenous factors for the implant shape are the thread pitch, thread depth, maximal diameter of the implant neck, and body size; the endogenous factors are bone density, cortical bone thickness, and non-osseointegration. An integrated procedure, combining the uniform design method, Kriging interpolation and a genetic algorithm, is applied to optimize the geometry of the dental implant. The threshold of minimal micromotion used for the optimization evaluation was 100 μm. The optimized model is imported into a 3-dimensional slurry printer to fabricate the zirconia green body (powder weakly bonded by polymer) of the implant, and the sintered implant is obtained using a 2-stage sintering process. Twelve models were constructed according to the uniform design method and their micromotion behavior was simulated using finite element modeling. The uniform design models yield a set of exogenous factors that provides the minimal micromotion (30.61 μm), taken as a suitable model. Kriging interpolation and the genetic algorithm then modified the exogenous factors of the suitable model, resulting in 27.11 μm for the optimized model. Experimental results show that the 3-dimensional slurry printer successfully fabricated the green body of the optimized model, but the accuracy of the sintered part still needs to be improved. In addition, the scanning electron microscopy morphology shows a stabilized t-phase microstructure, and the average compressive strength of the sintered part is 632.1 MPa. Copyright © 2016 John Wiley & Sons, Ltd.
Optimization of a new mathematical model for bacterial growth
USDA-ARS?s Scientific Manuscript database
The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...
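Since the actual equation is not given in the abstract, the sketch below uses the classical logistic growth ODE as a stand-in to show the coupling described: a Runge-Kutta solver evaluates the differential model, and an optimizer adjusts its parameters to fit (synthetic) growth data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Synthetic isothermal growth data, log10 CFU/mL versus time (hypothetical values).
t_data = np.linspace(0, 24, 13)                       # hours
rng = np.random.default_rng(8)

def growth_rhs(t, y, mu_max, y_max):
    # Logistic stand-in for the primary model: dy/dt = mu_max * y * (1 - y / y_max)
    return mu_max * y * (1 - y / y_max)

def simulate(params, t):
    mu_max, y_max, y0 = params
    sol = solve_ivp(growth_rhs, (t[0], t[-1]), [y0], t_eval=t,
                    args=(mu_max, y_max), method="RK45", rtol=1e-8)
    return sol.y[0]

true_params = np.array([0.35, 9.0, 3.0])              # per h, log10 CFU/mL, log10 CFU/mL
y_data = simulate(true_params, t_data) + 0.05 * rng.standard_normal(t_data.size)

# Fit the three parameters by nonlinear least squares from a rough initial guess.
fit = least_squares(lambda p: simulate(p, t_data) - y_data,
                    x0=[0.2, 8.0, 2.5], bounds=([0.01, 5.0, 1.0], [2.0, 12.0, 5.0]))
print("estimated (mu_max, y_max, y0):", np.round(fit.x, 3))
```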
Pareto Tracer: a predictor-corrector method for multi-objective optimization problems
NASA Astrophysics Data System (ADS)
Martín, Adanay; Schütze, Oliver
2018-03-01
This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Parameter uncertainty in simulations of extreme precipitation and attribution studies.
NASA Astrophysics Data System (ADS)
Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.
2017-12-01
The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used, however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty": uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components, through their associated tuning parameters, of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions; for example, the relative influence of deep convection should diminish as resolution approaches that at which convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and in progression towards models that fully resolve convection.
Csete, Mária; Sipos, Áron; Najafi, Faraz; Hu, Xiaolong; Berggren, Karl K
2011-11-01
A finite-element method for calculating the illumination-dependence of absorption in three-dimensional nanostructures is presented based on the radio frequency module of the Comsol Multiphysics software package (Comsol AB). This method is capable of numerically determining the optical response and near-field distribution of subwavelength periodic structures as a function of illumination orientations specified by polar angle, φ, and azimuthal angle, γ. The method was applied to determine the illumination-angle-dependent absorptance in cavity-based superconducting-nanowire single-photon detector (SNSPD) designs. Niobium-nitride stripes based on dimensions of conventional SNSPDs and integrated with ~ quarter-wavelength hydrogen-silsesquioxane-filled nano-optical cavity and covered by a thin gold film acting as a reflector were illuminated from below by p-polarized light in this study. The numerical results were compared to results from complementary transfer-matrix-method calculations on composite layers made of analogous film-stacks. This comparison helped to uncover the optical phenomena contributing to the appearance of extrema in the optical response. This paper presents an approach to optimizing the absorptance of different sensing and detecting devices via simultaneous numerical optimization of the polar and azimuthal illumination angles. © 2011 Optical Society of America