Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued by safety and lifespan concerns. These concerns are often addressed in industry through conservative current and voltage operating limits, which reduce overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges by incorporating model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods for battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This challenge is studied extensively in the literature, and several groups have approached the problem of improving the ability to estimate the model parameters. The first approach is to add sensors to the battery to gain more information for estimation.
The other main approach, and the one used in this dissertation, is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a circularity: the parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid this circularity, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to linear and nonlinear equivalent-circuit battery models and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge more slowly than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization.
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
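As a concrete illustration of the Fisher-information metric at the center of this framework, the sketch below computes the Fisher information matrix (FIM) of a first-order equivalent-circuit cell model for two candidate current inputs, using finite-difference output sensitivities. All numerical values (cell parameters, noise level, input shapes) are hypothetical, and the dissertation's models and optimization are far richer; this only shows the mechanics of scoring an input by its information content.

```python
def simulate(params, current, dt=1.0, ocv=3.3, c1=800.0):
    """Terminal voltage of a 1-RC equivalent-circuit cell (illustrative values)."""
    r0, r1 = params
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)   # forward-Euler RC state update
        out.append(ocv - i * r0 - v1)
    return out

def fisher_information(params, current, sigma=0.01, eps=1e-6):
    """FIM = (1/sigma^2) * S^T S, with output sensitivities S from finite differences."""
    base = simulate(params, current)
    n = len(base)
    sens = []
    for j in range(len(params)):
        pert = list(params)
        pert[j] *= 1.0 + eps
        dp = params[j] * eps
        vp = simulate(pert, current)
        sens.append([(vp[k] - base[k]) / dp for k in range(n)])
    return [[sum(sa[k] * sb[k] for k in range(n)) / sigma ** 2
             for sb in sens] for sa in sens]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

theta = (0.05, 0.02)                         # R0, R1 in ohms (hypothetical)
constant = [1.0] * 600                       # 1 A constant discharge
square = [1.0 if (t // 50) % 2 == 0 else -1.0 for t in range(600)]
fim_const = fisher_information(theta, constant)
fim_square = fisher_information(theta, square)
print(det2(fim_const), det2(fim_square))
```

Comparing a scalar summary of the FIM (here its determinant, i.e., D-optimality) across candidate inputs is the basic move behind input-shaping optimization: the input with the larger determinant supports faster, more accurate estimation.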
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to the 15-24 h required by the existing protocols. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
Definitive screening design enables optimization of LC-ESI-MS/MS parameters in proteomics.
Aburaya, Shunsuke; Aoki, Wataru; Minakuchi, Hiroyoshi; Ueda, Mitsuyoshi
2017-12-01
In proteomics, more than 100,000 peptides are generated from the digestion of human cell lysates. Proteome samples have a broad dynamic range in protein abundance; therefore, it is critical to optimize various parameters of LC-ESI-MS/MS to comprehensively identify these peptides. However, there are many parameters for LC-ESI-MS/MS analysis. In this study, we applied definitive screening design to simultaneously optimize 14 parameters in the operation of monolithic capillary LC-ESI-MS/MS to increase the number of identified proteins and/or the average peak area of MS1. The simultaneous optimization enabled the determination of two-factor interactions between LC and MS. Finally, we found two parameter sets of monolithic capillary LC-ESI-MS/MS that increased the number of identified proteins by 8.1% or the average peak area of MS1 by 67%. Definitive screening design should be highly useful for the high-throughput determination of the best parameter set in LC-ESI-MS/MS systems.
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
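The gradient approach described in this review can be made concrete with a minimal example: fitting a two-parameter decay model to data by minimizing the sum of squared errors (SSE) with a numerical gradient and a backtracking line search. The model, data, and starting point below are made up for illustration; real quantitative-biology models would replace them.

```python
import math

def sse(params, data):
    """Sum of squared errors for the illustrative model y = a * exp(-b * t)."""
    a, b = params
    return sum((y - a * math.exp(-b * t)) ** 2 for t, y in data)

def num_grad(f, x, h=1e-6):
    """Central-difference numerical gradient."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def gradient_descent(f, x0, iters=2000, step0=1.0):
    """Steepest descent; the step is halved until the objective decreases."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        g = num_grad(f, x)
        step = step0
        while step > 1e-12:                  # backtracking line search
            trial = [xi - step * gi for xi, gi in zip(x, g)]
            f_trial = f(trial)
            if f_trial < fx:
                x, fx = trial, f_trial
                break
            step *= 0.5
    return x, fx

# Synthetic noise-free data generated from a = 2.0, b = 0.5.
data = [(t, 2.0 * math.exp(-0.5 * t)) for t in range(10)]
fit, final_sse = gradient_descent(lambda p: sse(p, data), [1.0, 1.0])
```

A local minimum found this way is only guaranteed to be global for well-behaved objectives; the stochastic and sampling approaches the article surveys address exactly that limitation.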
Regenerative Medicine and Restoration of Joint Function
2012-10-01
identify the parameters that generate anatomically shaped bone substitutes of optimal composition and structure with an articulating profile. 2) to develop...strengths. An in vivo study in rabbits to evaluate these materials is ongoing. Task 2. Optimization of SFF Rolling Compaction Parameters: The work is...ongoing related to optimizing SFF rolling compaction parameters to control the density of green samples. We have used CPP powders for these studies
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform some tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, it is required to identify these parameters on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center of mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (excitation) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. Then, the parameter identification problems are transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or unlocking the nth to 1st joints), the mass properties of body 0 to n (or n to 0) are completely identified. For the proposed method, only simple dynamics equations are needed for identification. The excitation motion (orbit maneuvering and joint motion) is also easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on-orbit.
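The optimization step here is generic PSO applied to a momentum-based residual. The sketch below is a minimal global-best PSO, not the authors' implementation; the "measurements", velocity samples, mass, and inertia values are invented solely to give the optimizer something to fit.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy momentum-residual fit: recover a mass and an inertia from noiseless
# linear/angular momentum "measurements" (all numbers hypothetical).
true_mass, true_inertia = 120.0, 9.5
samples = [(1.0, 0.5), (2.0, 1.5), (0.5, 2.0)]      # (linear, angular) velocities
meas = [(true_mass * v, true_inertia * wz) for v, wz in samples]

def residual(theta):
    return sum((theta[0] * v - p) ** 2 + (theta[1] * wz - l) ** 2
               for (v, wz), (p, l) in zip(samples, meas))

est, err = pso(residual, [(50.0, 200.0), (1.0, 20.0)])
```

Because PSO needs only objective evaluations, the same loop works whether the residual comes from a single-body acceleration model or a two-body momentum model, which is what makes it attractive for the sequential identification scheme described above.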
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, in order to achieve zero defects during the processes, a genetic algorithm is applied to the optimized parameters obtained by the Taguchi method.
Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach
NASA Astrophysics Data System (ADS)
Wang, Li; Lu, Zhong-Rong
2017-05-01
This paper aims to identify parameters of Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure, that is, identifying model parameters is treated as an optimization problem with the nonlinear least squares objective function. Then, the enhanced response sensitivity approach, which has been shown convergent and proper for such kind of problems, is adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...
With the development of the Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, the Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, the traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of the traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates the ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th
Computing elastic anisotropy to discover gum-metal-like structural alloys
NASA Astrophysics Data System (ADS)
Winter, I. S.; de Jong, M.; Asta, M.; Chrzan, D. C.
2017-08-01
The computer aided discovery of structural alloys is a burgeoning but still challenging area of research. A primary challenge in the field is to identify computable screening parameters that embody key structural alloy properties. Here, an elastic anisotropy parameter that captures a material's susceptibility to solute solution strengthening is identified. The parameter has many applications in the discovery and optimization of structural materials. As a first example, the parameter is used to identify alloys that might display the super elasticity, super strength, and high ductility of the class of TiNb alloys known as gum metals. In addition, it is noted that the parameter can be used to screen candidate alloys for shape memory response, and potentially aid in the optimization of the mechanical properties of high-entropy alloys.
NASA Technical Reports Server (NTRS)
Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.
1980-01-01
A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.
NASA Astrophysics Data System (ADS)
Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT
2018-02-01
Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto optimal solutions. This work focuses on optimal multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Process parameters like tool speed, feed rate and depth of cut are considered, with brass material machined under dry conditions using high-speed steel end milling cutters and a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the multiple objectives ‘Ra’, ‘H’ and ‘Vib’ and identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, grey relational analysis (GRA).
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
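The mutation-crossover-selection cycle mentioned above is the standard DE/rand/1/bin scheme, sketched below. The Rastrigin function stands in for the nonlinear, multimodal parameter-structure objective (the real study couples DE to a groundwater simulator, which is not reproduced here); population size, scaling factor, and crossover rate are the control parameters whose sensitivity the authors analyze.

```python
import math
import random

def differential_evolution(f, bounds, pop_size=40, f_scale=0.8, cr=0.9,
                           gens=300, seed=1):
    """DE/rand/1/bin: mutation, binomial crossover, then greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # force at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < cr or d == j_rand:
                    v = pop[a][d] + f_scale * (pop[b][d] - pop[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])
                else:
                    v = pop[i][d]
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

def rastrigin(x):
    """Classic multimodal test surface with global minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

best_x, best_f = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

The greedy one-to-one selection is what distinguishes DE from a plain genetic algorithm: a trial vector only ever replaces its own parent, which keeps the population spread out while still driving it toward the global optimum.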
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christopher J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on Central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (Brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC, smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used in predicting the Pareto optimal sets of solution. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solution can be used as guidelines for the end users to select optimal combination of engine output and emission parameters depending upon their own requirements.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values of the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor
Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong
2011-01-01
In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least square optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. From the simulation and experimental results, it is shown that the parameter identification problem considered was characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool to identify the model parameters for an HMLVS, while the nonlinear least square optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge at a very stable solution and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris
2016-01-01
In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
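The MEWMA statistic in the paper is multivariate, but its change-point logic is easiest to see in a univariate EWMA sketch: smooth the standardized signal, and raise an alarm when the smoothed statistic leaves its control limits. The smoothing weight and control-limit multiplier below are exactly the kind of parameters a GA would tune; the synthetic "accelerometer feature" stream and all constants are invented for illustration.

```python
import math
import random

def ewma_alarms(stream, mean, sd, lam=0.2, l_mult=3.5):
    """Return indices where the EWMA of z-scores leaves its control limits."""
    limit = l_mult * math.sqrt(lam / (2.0 - lam))    # asymptotic EWMA std dev
    z_ewma, alarms = 0.0, []
    for idx, x in enumerate(stream):
        z_ewma = (1.0 - lam) * z_ewma + lam * (x - mean) / sd
        if abs(z_ewma) > limit:
            alarms.append(idx)
    return alarms

# Synthetic 1-D feature stream: an activity change (mean shift) at index 100.
rng = random.Random(42)
stream = ([rng.gauss(0.0, 1.0) for _ in range(100)]
          + [rng.gauss(3.0, 1.0) for _ in range(100)])
alarms = ewma_alarms(stream, mean=0.0, sd=1.0)
```

A smaller `lam` averages over more history (fewer false alarms, slower detection) while a larger one reacts faster; trading these off against accuracy and F-measure is precisely the optimization the GA performs in the paper.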
Study on feed forward neural network convex optimization for LiFePO4 battery parameters
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Parameter identification of the LiFePO4 battery used in modern automated facility-agriculture walking equipment is analyzed. An improved process model for the lithium battery is proposed, and an on-line estimation algorithm is presented. The battery parameters are identified using a feed forward neural network convex optimization algorithm.
Co-Optimization of Blunt Body Shapes for Moving Vehicles
NASA Technical Reports Server (NTRS)
Kinney, David J. (Inventor); Mansour, Nagi N. (Inventor); Brown, James L. (Inventor); Garcia, Joseph A. (Inventor); Bowles, Jeffrey V. (Inventor)
2014-01-01
A method and associated system for multi-disciplinary optimization of various parameters associated with a space vehicle that experiences aerocapture and atmospheric entry in a specified atmosphere. In one embodiment, simultaneous maximization of a ratio of landed payload to vehicle atmospheric entry mass, maximization of fluid flow distance before flow separation from vehicle, and minimization of heat transfer to the vehicle are performed with respect to vehicle surface geometric parameters, and aerostructure and aerothermal vehicle response for the vehicle moving along a specified trajectory. A Pareto Optimal set of superior performance parameters is identified.
USDA-ARS?s Scientific Manuscript database
Several bio-optical algorithms were developed to estimate the chlorophyll-a (Chl-a) and phycocyanin (PC) concentrations in inland waters. This study aimed at identifying the influence of the algorithm parameters and wavelength bands on output variables and searching optimal parameter values. The opt...
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design method is introduced. The method combines groundwater flow and transport results with a Genetic Algorithm (GA) to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop a three-dimensional groundwater flow and contaminant transport simulation. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal monitoring network design from several candidate monitoring locations. The monitoring network design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, will be discussed. However, to ensure the efficiency of the solution process and global optimality of the solution obtained using GA, it is necessary that appropriate GA parameter values be specified.
A sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
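A binary-coded GA of the kind described above can be sketched compactly. The selection, one-point crossover, bit-flip mutation, and elitism operators below are the ones whose sensitivity the study discusses; the twelve candidate wells, their per-well "information scores", the budget of four wells, and the penalty weight are all hypothetical stand-ins for the simulation-driven objective.

```python
import random

def ga_binary(fitness, n_bits, pop_size=40, gens=100, cx_prob=0.8,
              mut_prob=0.05, seed=3):
    """Simple elitist genetic algorithm over fixed-length bit strings (maximization)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        next_pop = sorted(pop, key=fitness, reverse=True)[:2]   # elitism
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)   # binary tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            if rng.random() < cx_prob:                  # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [b ^ 1 if rng.random() < mut_prob else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy well-selection objective: each bit switches one candidate well on;
# at most 4 wells may be chosen, enforced by a penalty.
scores = [7, 1, 5, 9, 2, 8, 3, 6, 4, 2, 1, 5]        # hypothetical per-well value
def fitness(bits):
    value = sum(s for s, b in zip(scores, bits) if b)
    excess = max(0, sum(bits) - 4)
    return value - 20 * excess                        # penalize budget violations
best = ga_binary(fitness, 12)
```

Rerunning with different crossover/mutation probabilities or without the elitist copy-through is the single-machine version of the sensitivity analysis the abstract describes.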
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
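The classic harmony search underlying this work improvises one candidate per iteration from memory consideration, pitch adjustment, and random consideration; the paper's "almost-parameter-free" variant adapts HMCR, PAR, and the bandwidth online, which is not reproduced in this fixed-parameter sketch. The source-location misfit, its true optimum at (1.2, -0.7), and the bounds are invented placeholders for the contaminant-transport simulation model.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.2,
                   iters=3000, seed=7):
    """Classic harmony search with fixed HMCR/PAR/bandwidth (minimization)."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # harmony-memory consideration
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        f_new = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if f_new < fit[worst]:                   # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

# Toy source-identification misfit with a hypothetical source at (1.2, -0.7).
def misfit(xy):
    return (xy[0] - 1.2) ** 2 + (xy[1] + 0.7) ** 2

loc, val = harmony_search(misfit, [(-10.0, 10.0), (-10.0, 10.0)])
```

In the simulation-optimization setting of the paper, `misfit` would instead run the transport simulator and compare predicted and observed concentrations at the monitoring wells.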
Parametric study of a canard-configured transport using conceptual design optimization
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.
1985-01-01
Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA
2006-03-21
A method for minimizing the life-cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of: identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying an optimal condition corresponding to the actual condition; comparing said optimal condition with the actual condition and identifying variances between the two; drawing from a set of pre-defined algorithms, created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances; and providing said explanation as an output to at least one user.
Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization
NASA Astrophysics Data System (ADS)
Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa
2018-05-01
In this paper, parametric studies of the virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the design cycle, the split of design variables, and role assignment. Typical numerical cases, including inverse design and drag-reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; e.g., increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of aerodynamic optimization efforts for more complicated engineering problems, such as multi-element airfoils and wing-body configurations.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
Optimal Linking Design for Response Model Parameters
ERIC Educational Resources Information Center
Barrett, Michelle D.; van der Linden, Wim J.
2017-01-01
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
NASA Astrophysics Data System (ADS)
Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.
2011-02-01
This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were minimization of the weld width and maximization of the weld penetration depth, resistance length and shearing force. A laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output-error and equation-error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were distributions for both 18F-FDG and 18F-FLT for which a log transformation was not optimal for producing normal SUV distributions.
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were distributions for both 18F-FDG and 18F-FLT for which a log transformation was not optimal for producing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
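The λ-selection procedure described in the records above is easy to reproduce: sweep the Box-Cox parameter over a grid and keep the value that maximizes the Shapiro-Wilk P-value. The sketch below uses synthetic lognormal "SUVs" in place of the patient data; the grid range and sample size are illustrative.

```python
import numpy as np
from scipy import stats

def best_boxcox_lambda(x, lambdas):
    """Return (lambda, P-value) maximizing the Shapiro-Wilk P-value of the
    Box-Cox-transformed sample x (x must be positive)."""
    best_lam, best_p = None, -1.0
    for lam in lambdas:
        # lambda = 0 is the log-transform limit of the Box-Cox family
        y = np.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam
        p = stats.shapiro(y).pvalue
        if p > best_p:
            best_lam, best_p = lam, p
    return best_lam, best_p

# Synthetic skewed "SUVs": exponentiated normal draws, so the optimal
# transformation should sit near the log case (lambda close to 0).
rng = np.random.default_rng(0)
suv = np.exp(rng.normal(1.0, 0.4, size=57))
lam, p = best_boxcox_lambda(suv, np.linspace(-2.0, 2.0, 81))
```

Because λ = 0 lies on the grid, the selected P-value is guaranteed to be at least that of the plain log transformation, which is exactly the point of the study's conclusion.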
Optimization of injection molding process parameters for a plastic cell phone housing component
NASA Astrophysics Data System (ADS)
Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya
2016-11-01
Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings can produce defects such as shrinkage in the molded part. This study aims to determine optimum injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The currently used machine settings produced shrinkage and mis-specified lengths, with dimensions below the lower limit. Thus, to identify optimum process parameters that maintain the targeted length and width with minimal variation, further experiments were needed. Mold temperature, injection pressure and screw rotation speed are used as the process parameters in this research. Response Surface Methodology (RSM) is applied to find the optimal molding process parameters. The major contributing factors influencing the responses were identified with the analysis of variance (ANOVA) technique. Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.
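As a rough illustration of the RSM step, the sketch below fits a full second-order response surface to synthetic shrinkage data for two coded factors and then locates the fitted optimum on a grid. The factor names, data and surface are hypothetical, not the study's measurements, and the third factor (screw speed) is omitted for brevity.

```python
import numpy as np

def design_matrix(X):
    """Full second-order (quadratic) model in two coded factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical experiment: coded mold temperature (x1) and injection
# pressure (x2), with simulated shrinkage lowest near (0.3, -0.2).
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = (X[:, 0] - 0.3)**2 + 0.5 * (X[:, 1] + 0.2)**2 + rng.normal(0.0, 0.01, 40)

# Least-squares fit of the response surface coefficients
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Locate the minimum of the fitted surface on a fine grid of settings
g = np.linspace(-1.0, 1.0, 201)
G = np.array([(a, b) for a in g for b in g])
opt = G[(design_matrix(G) @ beta).argmin()]
```

In a real RSM study the coefficients in `beta` would also be screened by ANOVA to identify the major contributing factors before the optimum is trusted.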
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure/metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1.
Stage 3 incorporates the use of an interactive visual analytics framework for decision support in the selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
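Stage 1 above amounts to repeatedly identifying non-dominated parameterizations. A minimal Pareto filter (all objectives minimized) can be written as follows; the error table is a made-up stand-in for calibration criteria such as peak-flow and low-flow errors.

```python
def pareto_front(points):
    """Return indices of non-dominated points, all objectives minimized."""
    m = len(points[0])
    front = []
    for i, p in enumerate(points):
        # p is dominated if some q is no worse everywhere and better somewhere
        dominated = any(
            all(q[k] <= p[k] for k in range(m)) and
            any(q[k] < p[k] for k in range(m))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Made-up calibration errors (e.g. peak-flow error vs. low-flow error):
errs = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.3, 0.8)]
front = pareto_front(errs)  # (0.5, 0.5) is dominated by (0.4, 0.4)
```

Surrogate algorithms such as GOMORS wrap a filter like this around an expensive simulator, using cheap surrogate predictions to decide which candidate parameterizations are worth a real simulation.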
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often terminate in a local maximum.
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often terminate in a local maximum. PMID:23766941
Computational Difficulties in the Identification and Optimization of Control Systems.
1980-01-01
As more realistic models for resource management are developed, the need for efficient computational techniques for parameter identification and optimization (optimal control) in "state" models increases. This research was supported in part by the National Science Foundation under grant NSF-MCS 79-05774.
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated, which extends the measurement protocol with a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
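TAPIR's sequence details are beyond a short example, but the underlying Look-Locker T1 estimation step can be sketched as a three-parameter fit of the apparent recovery curve followed by the classical correction T1 = T1* (B/A - 1). The timing, noise level and starting values below are illustrative, not TAPIR protocol values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Apparent Look-Locker recovery: S(t) = A - B * exp(-t / T1_star)
def signal(t, A, B, T1s):
    return A - B * np.exp(-t / T1s)

t = np.linspace(0.05, 3.0, 30)                 # sampling times in s (illustrative)
A_true, B_true, T1s_true = 1.0, 1.9, 0.8
rng = np.random.default_rng(5)
meas = signal(t, A_true, B_true, T1s_true) + rng.normal(0.0, 0.01, t.size)

# Nonlinear least-squares fit of the three apparent parameters
(A, B, T1s), _ = curve_fit(signal, t, meas, p0=[1.0, 2.0, 1.0])
T1 = T1s * (B / A - 1.0)                       # Look-Locker-corrected T1
```

An imperfect inversion pulse biases B, and hence T1, which is why the record above maps the inversion efficiency separately rather than trusting the nominal B/A ratio.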
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
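The core of such a follow-up is a random-walk exploration of a detection statistic over the candidate's parameter space. The sketch below runs a minimal Metropolis sampler on a one-dimensional toy "statistic" sharply peaked at a hypothetical signal frequency; the F statistic itself and the hierarchical, multistage narrowing of the parameter space are beyond this illustration.

```python
import math
import random

def metropolis(log_target, x0, steps=5000, scale=0.1, seed=7):
    """Random-walk Metropolis sampler for a 1-D log-density."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lpp = log_target(prop)
        if math.log(rng.random()) < lpp - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lpp
        chain.append(x)
    return chain

# Toy "detection statistic" sharply peaked at a hypothetical frequency f0:
f0 = 0.7
chain = metropolis(lambda f: -0.5 * ((f - f0) / 0.01)**2, x0=0.5)
post = chain[len(chain) // 2:]                  # discard burn-in
f_hat = sum(post) / len(post)
```

Controlling the effective size of the parameter space, as the record emphasizes, amounts to matching the proposal `scale` (and the coherence time of the statistic) to the width of this peak at each stage.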
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters may vary with operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
The application of artificial intelligence in the optimal design of mechanical systems
NASA Astrophysics Data System (ADS)
Poteralski, A.; Szczepanik, M.
2016-11-01
The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past were incapable of giving a low-cost and complete solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with the application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. The structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in acoustics problems modeled by the MFS.
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
Multi-Criteria Optimization of Regulation in Metabolic Networks
Higuera, Clara; Villaverde, Alejandro F.; Banga, Julio R.; Ross, John; Morán, Federico
2012-01-01
Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate-cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different "environments" (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By calculating the average of the different parameter sets for the knee solutions most frequently found, a final and optimal consensus set of parameters can be obtained, which is an indication of the existence of a universal regulation mechanism for this system. The implications of such a universal regulatory switch are discussed in the framework of large metabolic networks. PMID:22848435
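The knee-point selection mentioned above can be implemented, for a sorted bi-objective front, as the point farthest from the straight line joining the two extremes. The front values below are invented for illustration.

```python
import math

def knee_point(front):
    """Index of the point farthest from the line joining the front's extremes.
    Assumes a bi-objective front sorted along the first objective."""
    (x1, y1), (x2, y2) = front[0], front[-1]
    norm = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # perpendicular distance from p to the extreme-to-extreme line
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / norm

    return max(range(len(front)), key=lambda i: dist(front[i]))

# Invented trade-off curve (flux error vs. energetic cost, both minimized):
front = [(0.0, 1.0), (0.1, 0.45), (0.2, 0.2), (0.5, 0.12), (1.0, 0.0)]
k = knee_point(front)  # the bend at (0.2, 0.2)
```

This maximum-distance definition is one common convention; other knee criteria (e.g. maximum curvature) pick out essentially the same bend on well-behaved fronts.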
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land-surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. To address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land-surface modeling system) was subjected to a sensitivity and optimization/parameter-estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter-estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter-estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load and temperature. Moreover, when experimental data are available in the form of polarization curves or local distributions of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of an ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or are sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to match the simulation results to the experimental results of the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization into a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as of ride and handling performances, can be implemented for further optimization.
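The rank-summation step that collapses the 30 objectives into one can be sketched as follows: each candidate receives its rank under every objective, and the rank sums form a single pseudo-objective. The three-candidate error table is hypothetical, standing in for errors on the kinematic tests.

```python
def rank_sum(objectives):
    """Sum each candidate's rank under every objective (lower is better)."""
    n, m = len(objectives), len(objectives[0])
    total = [0] * n
    for k in range(m):
        # rank candidates by their k-th objective value (0 = best)
        order = sorted(range(n), key=lambda i: objectives[i][k])
        for rank, i in enumerate(order):
            total[i] += rank
    return total

# Hypothetical errors of three parameter sets on three kinematic tests:
objs = [(1.0, 5.0, 2.0), (2.0, 1.0, 1.0), (3.0, 2.0, 3.0)]
scores = rank_sum(objs)
best = min(range(len(scores)), key=scores.__getitem__)
```

Ranking makes the aggregation scale-free, so objectives measured in different units contribute equally without manual weighting; the "dynamic" variant in the article updates these ranks as the population evolves.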
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
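The weighted least squares identification step common to both designs can be sketched directly. The example below recovers two hypothetical "stability derivatives" from simulated input-output data; with unequal confidence in the measurements, the weight vector would be non-uniform.

```python
import numpy as np

def weighted_least_squares(H, z, w):
    """Solve min_x (z - H x)^T W (z - H x) with W = diag(w) via the
    normal equations (H^T W H) x = H^T W z."""
    W = np.diag(w)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Simulated identification of two hypothetical "derivatives" a and b from
# measurements z = a*u + b*v + noise:
rng = np.random.default_rng(42)
u = rng.uniform(-1.0, 1.0, 200)
v = rng.uniform(-1.0, 1.0, 200)
z = 2.0 * u - 0.5 * v + rng.normal(0.0, 0.05, 200)

H = np.column_stack([u, v])
w = np.full(200, 1.0)     # uniform weights here; unequal under varying trust
a_hat, b_hat = weighted_least_squares(H, z, w)
```

In the adaptive controller, an estimate like this would be refreshed recursively as new flight data arrive and then fed to whichever control-law synthesis (regulator solution or single-stage optimization) is in use.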
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that reproduced the mean of the training data for the conflicting data sets, while simultaneously containing parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization, and it can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems, and constrained problems, without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
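JuPOETs itself is a Julia package; the Python sketch below shows only one of its building blocks, the Pareto ranking that defines "on or near the optimal tradeoff surface." Candidates on the non-dominated front get rank 0, and successive fronts are peeled off iteratively.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (minimization convention)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(points):
    # rank 0 = non-dominated front; peel fronts until all points are ranked
    ranks = {}
    remaining = set(range(len(points)))
    r = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return [ranks[i] for i in range(len(points))]

# Two conflicting training objectives for four candidate parameter sets:
ranks = pareto_rank([(1, 2), (2, 1), (2, 2), (3, 3)])  # → [0, 0, 1, 2]
```

In JuPOETs, this ranking replaces a scalar error inside the simulated-annealing acceptance test, so the ensemble accumulates low-rank (near-front) parameter sets rather than a single best fit.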
Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.
2017-10-01
A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least squares error functional. Next, the focus is placed on finding the optimal weighting coefficients that enter the error functional. Toward that end, stochastic noise with systematic and non-systematic components is introduced into the available measurement results; a superordinate optimization problem then seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found which is optimal in a certain class.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation had a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.
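The sampling idea behind MVFSA — accept worse parameter sets with a probability that shrinks as a temperature cools, so sampling concentrates where model error is low — is ordinary simulated annealing at its core. The sketch below uses a toy two-parameter "model error" surface (an invented stand-in for a WRF skill score); MVFSA itself uses a more aggressive cooling schedule and multiple chains.

```python
import math, random

random.seed(2)

# Toy "model error" over two bounded parameters (stand-in for a skill score).
def model_error(p):
    x, y = p
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

bounds = [(0.0, 1.0), (0.0, 1.0)]

def anneal(error, bounds, iters=5000, t0=1.0):
    cur = [random.uniform(lo, hi) for lo, hi in bounds]
    cur_e = error(cur)
    best, best_e = cur[:], cur_e
    for i in range(iters):
        t = t0 * (0.999 ** i)                     # geometric cooling schedule
        cand = [min(hi, max(lo, c + random.gauss(0, 0.1)))  # clipped proposal
                for c, (lo, hi) in zip(cur, bounds)]
        cand_e = error(cand)
        # Metropolis acceptance: always take improvements, sometimes take
        # worse points early on, rarely once the temperature is low
        if cand_e < cur_e or random.random() < math.exp(-(cand_e - cur_e) / t):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur[:], cur_e
    return best, best_e

best, best_e = anneal(model_error, bounds)
```

Replacing `model_error` with a function that runs the model and scores it against observations gives the basic UQ sampling loop; the chain's visited points then also characterize the low-error region, not just its minimum.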
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult, so they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. However, the complexity and nonlinearity of biological processes pose a significant challenge to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models of arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and the Firefly Algorithm. We have also verified the reliability of the estimated parameters using an a posteriori practical identifiability test.
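A minimal sketch of the hybrid idea follows: standard firefly attraction moves (each firefly drifts toward brighter, i.e. lower-error, neighbours) combined with a Differential Evolution mutation as the "evolutionary operation." The objective is a plain sphere function standing in for a model-fit error, and all coefficients are generic textbook values, not the paper's settings.

```python
import math, random

random.seed(3)

def sphere(x):
    # toy objective standing in for the fit error of a biological model
    return sum(v * v for v in x)

def firefly_de(obj, dim=2, n=15, gens=150):
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    bright = [obj(p) for p in pop]
    for g in range(gens):
        alpha = 0.3 * (0.98 ** g)              # shrinking random-walk step
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:      # firefly i attracted to brighter j
                    dist2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = math.exp(-1.0 * dist2)   # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * random.uniform(-1, 1)
                              for a, b in zip(pop[i], pop[j])]
                    bright[i] = obj(pop[i])
        # DE-style mutation replacing the worst member: the "evolutionary operation"
        worst = max(range(n), key=lambda k: bright[k])
        i1, i2, i3 = random.sample(range(n), 3)
        trial = [pop[i1][d] + 0.5 * (pop[i2][d] - pop[i3][d]) for d in range(dim)]
        if obj(trial) < bright[worst]:
            pop[worst], bright[worst] = trial, obj(trial)
    best = min(range(n), key=lambda k: bright[k])
    return pop[best], bright[best]

best_x, best_f = firefly_de(sphere)
```

For an actual kinetic model, `obj` would integrate the ODE system and return the discrepancy against time-course data, which is where the nonlinearity the abstract mentions enters.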
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the sensitive group is retained while the insensitive group is eliminated from further study. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. As a result, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters remain. We use CLM-CASA, a global terrestrial model, as an example to verify our findings, with sample sizes ranging from 7,000 to 280,000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol', and the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
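The iterate-and-remove structure can be sketched as follows. Note the hedge: real DGSAM recomputes global (variance-based) sensitivity indices at each step; this toy uses a crude one-at-a-time output range as the sensitivity measure, and the test function is invented.

```python
def output_range(f, idx, base, lo=0.0, hi=1.0, steps=11):
    # crude one-at-a-time sensitivity: output spread as one parameter sweeps
    # its range while the others are held at their base values
    vals = []
    for s in range(steps):
        x = base[:]
        x[idx] = lo + (hi - lo) * s / (steps - 1)
        vals.append(f(x))
    return max(vals) - min(vals)

def iterative_screening(f, dim):
    # repeatedly drop the least influential parameter until two remain,
    # re-evaluating sensitivities after each removal (the DGSAM-like loop)
    active = list(range(dim))
    removed = []
    base = [0.5] * dim
    while len(active) > 2:
        sens = {i: output_range(f, i, base) for i in active}
        least = min(active, key=lambda i: sens[i])
        active.remove(least)
        removed.append(least)
        base[least] = 0.5    # freeze the eliminated parameter at its midpoint
    return active, removed

# Invented test model: strong, medium, weak, negligible effects plus one interaction.
f = lambda x: 10 * x[0] + 5 * x[1] + 1 * x[2] + 0.1 * x[3] + 2 * x[0] * x[1]
active, removed = iterative_screening(f, 4)   # removed → [3, 2], active → [0, 1]
```

Re-evaluating after each removal is what distinguishes the dynamic scheme from a one-shot ranking: an interaction partner's influence can change once another parameter is frozen.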
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land-surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that it consistently overestimates evapotranspiration in arid regions, likely because of the misrepresentation of water limitation and energy partitioning in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters performed better (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value, and the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve model performance.
Back analysis of geomechanical parameters in underground engineering using artificial bee colony.
Zhu, Changxing; Zhao, Hongbo; Zhao, Ming
2014-01-01
Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC was used as a global optimization algorithm to search for the unknown geomechanical parameters in problems with an analytical solution. For problems without an analytical solution, optimal back analysis is time-consuming, and a least squares support vector machine (LSSVM) was used to build the relationship between the unknown geomechanical parameters and displacement and thus improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and to a tunnel without one. The results show that the proposed method is feasible.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that combines the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex optimization algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base-case operation.
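The integrated-performance-index pattern — collapse several conflicting objectives into one weighted scalar, then hand it to a derivative-free simplex search — can be sketched directly. The three objective forms, the weights, and the two "operational parameters" below are all illustrative placeholders, not the ASMN_G formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the three conflicting objectives (illustrative forms only):
def ghg(p):      return (p[0] - 0.2) ** 2 + 0.1 * p[1]          # emissions
def cost(p):     return 0.5 * p[0] + (p[1] - 0.6) ** 2          # operating cost
def effluent(p): return (p[0] - 0.8) ** 2 + (p[1] - 0.4) ** 2   # effluent quality

weights = (0.4, 0.3, 0.3)   # relative importance; a modelling choice, not data

def performance_index(p):
    # integrated index: weighted sum of the three normalized objectives
    return (weights[0] * ghg(p) + weights[1] * cost(p)
            + weights[2] * effluent(p))

p0 = np.array([0.5, 0.5])   # base-case operational parameters
res = minimize(performance_index, p0, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9})
```

The weighted-sum scalarization is what produces the trade-offs the abstract reports: shifting weight toward emissions drags the optimum away from the cost- and effluent-optimal settings.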
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2017-01-01
Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
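A level-1 QAOA circuit is small enough to verify numerically by direct statevector simulation. The sketch below does this for MaxCut on the 3-node ring (the smallest odd ring); it is a brute-force check of the two-parameter landscape, not the paper's Fermionic analysis, and the grid resolution is arbitrary.

```python
import numpy as np

# Level-1 QAOA for MaxCut on the 3-node ring, on the 8-amplitude statevector.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

# MaxCut value of each computational basis state.
cost = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                 for z in range(2 ** n)], dtype=float)

def rx_all(psi, beta):
    # apply the mixing rotation exp(-i*beta*X) to every qubit
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        psi = np.moveaxis(psi.reshape([2] * n), q, 0)
        a0, a1 = psi[0].copy(), psi[1].copy()
        psi[0], psi[1] = c * a0 + s * a1, s * a0 + c * a1
        psi = np.moveaxis(psi, 0, q).reshape(-1)
    return psi

def qaoa_expectation(gamma, beta):
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform start
    psi = np.exp(-1j * gamma * cost) * psi                     # phase separation
    psi = rx_all(psi, beta)                                    # mixing
    return float(np.sum(np.abs(psi) ** 2 * cost))

# Coarse grid search over the two level-1 parameters (gamma, beta).
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 25)
           for b in np.linspace(0, np.pi, 25))
```

At gamma = 0 the circuit leaves the uniform superposition unchanged, so the expectation equals the average cut (1.5 here); the grid search can only improve on that, while the maximum cut of 2 bounds it from above. For larger p, this brute-force landscape scan is exactly what the paper's analytical expressions and symmetry arguments make unnecessary.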
Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.
2013-01-01
Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, the development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and the inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End-point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, the neural networks correctly predicted enantioselectivity and catalytic parameters comparable to those measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
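The Box-Cox step mentioned above addresses a common problem: kinetic constants are typically strongly right-skewed, which hinders network training. A minimal sketch, using synthetic log-normal values as a stand-in for measured kcat data (the real study used experimental CYP2C19 parameters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Skewed, log-normal-like synthetic values standing in for kcat measurements.
kcat = rng.lognormal(mean=1.0, sigma=0.8, size=500)

# Box-Cox fits the exponent lam that best normalizes the data
# (lam near 0 corresponds to a log transform); inputs must be positive.
transformed, lam = stats.boxcox(kcat)

skew_before = stats.skew(kcat)
skew_after = stats.skew(transformed)   # should be near zero after the transform
```

Network targets would be the `transformed` values, with `lam` kept so predictions can be mapped back to the original kcat scale via the inverse transform.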
Perdikaris, Paris; Karniadakis, George Em
2016-05-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space and the efficient identification of global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration-exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation.
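The single-fidelity core of this loop — a Gaussian-process surrogate whose posterior variance drives an expected-improvement acquisition — can be sketched in a few lines. Everything here is illustrative: a 1D toy objective stands in for an expensive model run, the kernel length-scale is picked by hand rather than learned, and the multi-fidelity auto-regressive fusion of the paper is omitted entirely.

```python
import numpy as np
from scipy.stats import norm

# 1D toy "expensive" objective (stand-in for a calibration run).
def f(x):
    return (x - 0.7) ** 2 + 0.1 * np.sin(5 * x)

def rbf(a, b, ell=0.3):
    # squared-exponential kernel with a fixed, hand-picked length-scale
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs):
    # noise-free GP regression; the jitter keeps K well conditioned
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: large where the mean is low OR the variance is high,
    # which is exactly the exploration/exploitation balance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

grid = np.linspace(0.0, 2.0, 200)      # candidate parameter values
X = np.array([0.1, 1.0, 1.9])          # initial design
y = f(X)
for _ in range(15):                    # adaptive sampling loop
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

The multi-fidelity extension replaces `gp_posterior` with correlated surrogates trained on cheap (1D) and expensive (3D) model outputs, so most of the budget is spent at low fidelity.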
Motion prediction of a non-cooperative space target
NASA Astrophysics Data System (ADS)
Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan
2018-01-01
Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is a prerequisite for capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, its dynamic parameters, such as inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion can then be acquired by substituting these identified parameters into Euler's equations for the target. Accurate prediction requires precise identification, and this paper presents an effective method to identify these dynamic parameters. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and, guided by the objective function's gradient, quickly converges to a minimum of the objective function, so an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
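The two-step rough-estimate-then-refine structure can be sketched for the torque-free Euler equations, which depend only on inertia ratios. The ratio values, initial angular velocity, and time grid below are all invented for illustration, and the refinement uses a generic derivative-free local search rather than the paper's interior-point method.

```python
import numpy as np
from scipy.optimize import minimize

# Torque-free Euler equations in terms of identifiable inertia ratios
# p = ((I2-I3)/I1, (I3-I1)/I2, (I1-I2)/I3); values are illustrative.
p_true = np.array([0.5, -0.8, 0.4])
dt = 0.01

def deriv(w, p):
    return np.array([p[0] * w[1] * w[2], p[1] * w[2] * w[0], p[2] * w[0] * w[1]])

def simulate(p, w0, steps=300):
    w, traj = np.array(w0, dtype=float), [np.array(w0, dtype=float)]
    for _ in range(steps):              # classic RK4 integration
        k1 = deriv(w, p);               k2 = deriv(w + dt / 2 * k1, p)
        k3 = deriv(w + dt / 2 * k2, p); k4 = deriv(w + dt * k3, p)
        w = w + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(w)
    return np.array(traj)

observed = simulate(p_true, [0.3, 0.5, 0.4])   # stands in for tracked motion data

# Step 1: rough estimate from finite-difference angular accelerations,
# via a per-axis least-squares ratio wd_i ~ p_i * (w_j * w_k).
wd = np.gradient(observed, dt, axis=0)
rough = np.empty(3)
for i in range(3):
    q = observed[:, (i + 1) % 3] * observed[:, (i + 2) % 3]
    rough[i] = (wd[:, i] @ q) / (q @ q)

# Step 2: refine by minimizing observed-vs-predicted mismatch, starting
# from the rough estimate (Nelder-Mead here, standing in for the IPM).
def mismatch(p):
    return np.sum((simulate(p, observed[0]) - observed) ** 2)

p_hat = minimize(mismatch, rough, method="Nelder-Mead").x
```

With noisy real observations, step 1 degrades first (finite differences amplify noise), which is exactly why the trajectory-level refinement in step 2 is needed before the fitted parameters are propagated forward for prediction.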
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations and arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This compares with MAE = 5.48 years using the default values of 1.5 mm3 and 4 mm, respectively, though this 7.3% improvement was not statistically significant.
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Study of optimal laser parameters for cutting QFN packages by Taguchi's matrix method
NASA Astrophysics Data System (ADS)
Li, Chen-Hao; Tsai, Ming-Jong; Yang, Ciann-Dong
2007-06-01
This paper reports a study of optimal laser parameters for cutting QFN (Quad Flat No-lead) packages using a diode-pumped solid-state laser system (DPSSL). The QFN cutting path includes two different materials: the encapsulating epoxy and a copper lead-frame substrate. Taguchi's experimental method with an L9(3^4) orthogonal array is employed to obtain the optimal combination of parameters. A quantified mechanism is proposed for examining the laser cutting quality of a QFN package. The influences of factors such as laser driving current, laser frequency, and cutting speed on the cutting quality are also examined. From the experimental results, the factors in order of decreasing significance are (a) laser frequency, (b) cutting speed, and (c) laser driving current. The optimal parameters are a laser frequency of 2 kHz, a cutting speed of 2 mm/s, and a driving current of 29 A. Besides identifying this order of dominance, the matrix experiment also determines the best level for each control factor. A verification experiment confirms that the application of laser cutting technology to QFN packages is very successful when using the optimal parameters predicted from the matrix experiments.
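The matrix-experiment analysis behind the reported factor ranking can be sketched as follows. The L9(3^4) array is the standard one, while the quality scores below are hypothetical placeholders for the study's measured cut-quality values.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

def factor_effects(design, response):
    # Mean response at each level of each factor; the spread across levels
    # measures how strongly that factor influences the result.
    n_factors = design.shape[1]
    effects = np.zeros((n_factors, 3))
    for f in range(n_factors):
        for lvl in range(3):
            effects[f, lvl] = response[design[:, f] == lvl].mean()
    return effects

# Hypothetical cut-quality scores for the 9 runs (larger is better);
# the real study would use measured QFN cut-quality metrics.
quality = np.array([5.1, 6.8, 6.0, 7.2, 8.9, 8.1, 6.5, 8.3, 7.6])
effects = factor_effects(L9, quality)
# Rank factors by the range of their level means (most significant first).
ranking = np.argsort(effects.max(axis=1) - effects.min(axis=1))[::-1]
```

Because each level of each factor appears in exactly three runs, the level means isolate each factor's main effect, which is how the matrix experiment both ranks the factors and picks the best level of each.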
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but its definition of the monitoring extent (number and type of parameters) is incomplete. To resolve these uncertainties, this research focuses on optimization of the monitoring process, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that describe the physical, chemical and biological processes in a closed methanogenic landfill, and to make monitoring less expensive. The research included development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The optimization identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research is a precise definition of the minimal set of parameters that should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented to identify the time points that provide the best quantitative estimates of the parameters for a given number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimation of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was found to have minimal impact on parameter estimation accuracy (≈5%), as validated in silico and in vivo. This near order-of-magnitude reduction in the number of required time points allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire set of time sampling points were used.
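A greedy variant of D-optimal time-point selection can be sketched as below: the determinant of the Fisher-information surrogate det(JᵀJ) is maximized over subsets of candidate time points. The bi-exponential decay model, its parameter values, and the greedy search are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sensitivities(t, a=0.4, tau1=0.5, tau2=2.0):
    # Jacobian of a bi-exponential decay f = a*exp(-t/tau1) + (1-a)*exp(-t/tau2)
    # with respect to (a, tau1, tau2); model and values are illustrative.
    e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
    return np.column_stack([e1 - e2,
                            a * t / tau1**2 * e1,
                            (1 - a) * t / tau2**2 * e2])

def greedy_d_optimal(t_all, n_select):
    # Greedily add the time point that most increases det(J^T J),
    # a simple surrogate for the D-optimality criterion.
    J = sensitivities(t_all)
    chosen = []
    for _ in range(n_select):
        best_i, best_det = None, -np.inf
        for i in range(len(t_all)):
            if i in chosen:
                continue
            Js = J[chosen + [i]]
            # Small ridge keeps the determinant defined before J is square.
            d = np.linalg.det(Js.T @ Js + 1e-12 * np.eye(3))
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
    return np.sort(t_all[chosen])

t_full = np.linspace(0.05, 9.0, 90)    # a "complete" 90-point sampling
t_opt = greedy_d_optimal(t_full, 10)   # reduced 10-point design
```

The selected points spread over the decay so that each parameter's sensitivity is well represented, unlike a naive choice of the first 10 samples.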
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
Hypothesis-driven classification of materials using nuclear magnetic resonance relaxometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espy, Michelle A.; Matlashov, Andrei N.; Schultz, Larry J.
Technologies related to identification of a substance in an optimized manner are provided. A reference group of known materials is identified. Each known material has known values for several classification parameters. The classification parameters comprise at least one of T1, T2, T1ρ, a relative nuclear susceptibility (RNS) of the substance, and an x-ray linear attenuation coefficient (LAC) of the substance. A measurement sequence is optimized based on at least one of a measurement cost of each of the classification parameters and an initial probability of each of the known materials in the reference group.
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze the coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed, with the pre-impact parameters of the models treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking scenario were significantly lower than those for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on this real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.
2017-09-01
Plastic injection moulding is a popular manufacturing method not only because it is reliable but also because it is efficient and cost-saving. It is able to produce plastic parts with detailed features and complex geometry. However, defects arising in the injection moulding process degrade the quality and aesthetics of the moulded product. The most common defect is warpage, and inappropriate process parameter settings on the injection moulding machine are one of the reasons it occurs. The aims of this study were to improve the quality of an injection moulded part by identifying the optimal parameters for minimizing warpage using Response Surface Methodology (RSM) and Glowworm Swarm Optimization (GSO). The most significant parameter was then identified, and the recommended parameter setting was compared with the settings optimized by RSM and GSO. A mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight (AMI) 2012; RSM was performed using Design Expert 7.0, and GSO was implemented in MATLAB. The warpage in the y-direction recommended by RSM was reduced by 70%, and that recommended by GSO was reduced by 61%. The resulting warpages under the optimal parameter settings from RSM and GSO were validated by simulation in AMI 2012. RSM performed better than GSO in solving the warpage issue.
Non-adaptive and adaptive hybrid approaches for enhancing water quality management
NASA Astrophysics Data System (ADS)
Kalwij, Ineke M.; Peralta, Richard C.
2008-09-01
Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal at the start of optimization can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters at run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates.
The guideline parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time-consuming. For comparison, AGA, AGCT, and GC were applied to optimize pumping rates for assumed well locations in a complex large-scale contaminant transport and remediation optimization problem at the Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days); AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period for real-world optimization problems. Although demonstrated for a groundwater quality problem, the approach is also applicable to other arenas, such as managing salt-water intrusion and surface water contaminant loading.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Thirdly, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit, that is, the state that maximizes it. An approximate limit on the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values of the noise parameter in the mentioned range can be estimated with equal precision. A comparison of our results with previous studies in different classical environments is made.
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law with four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: (1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; (2) the design parameters may range over very wide intervals; (3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression for the objective function; they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space must be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
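The min-max formulation can be illustrated with a derivative-free optimizer on a toy surrogate. Everything below, the first-order "dynamics", the gain penalty, and the sampled initial conditions, is an assumption standing in for the paper's attitude simulations; the point is the structure: minimize the worst convergence time over the initial-condition set using only function evaluations.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical set of initial attitude errors to be robust against.
initial_conditions = [0.5, 1.0, 2.0, 4.0]

def convergence_time(k, x0):
    # Toy surrogate: a first-order system x' = -k*x settles in ~log(x0/tol)/k,
    # with a penalty that grows for overly aggressive gains.
    k = abs(k[0]) + 1e-9
    return np.log(x0 / 1e-3) / k + 0.5 * k**2

def worst_case(k):
    # Min-max objective: the slowest convergence over all initial conditions.
    return max(convergence_time(k, x0) for x0 in initial_conditions)

# Nelder-Mead needs only function values, like the derivative-free
# methods the paper relies on (no analytical objective required).
res = minimize(worst_case, x0=[0.5], method="Nelder-Mead")
k_opt = abs(res.x[0])
```

Because each objective evaluation would really be a full simulation, the derivative-free outer loop is what keeps the approach practical.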
Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah
2011-03-01
The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) fractional factorial design of resolution IV. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. The main response variables measured were drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined from response surface plots. The process parameters exerted various effects on the different response variables; hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions were defined which produced coated tablets with high drug-loading efficiency, low incidence of tablet damage and low coat thickness variation. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating the robustness of the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
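The screening design named in this abstract can be constructed directly. This sketch assumes the generator F = ABC, one standard choice that yields a resolution IV 2^(6-1) design for six two-level factors; the paper does not state which generator was actually used, and the coded -1/+1 levels would map to the study's real settings.

```python
import itertools
import numpy as np

# 2^(6-1) fractional factorial in coded -1/+1 units for six process
# parameters (batch size, inlet temperature, atomizing pressure,
# plenum pressure, spray rate, coating level). Generator F = ABC
# (defining relation I = ABCF) gives resolution IV: 32 runs instead of 64.
base = np.array(list(itertools.product([-1, 1], repeat=5)))   # full 2^5 in A..E
F = base[:, :3].prod(axis=1)                                  # generator F = ABC
design = np.column_stack([base, F])

# Sanity checks: 32 runs, six balanced, mutually orthogonal columns.
assert design.shape == (32, 6)
assert np.allclose(design.T @ design, 32 * np.eye(6))
```

Orthogonal columns mean each main effect is estimated independently, at the cost (resolution IV) of aliasing main effects with three-factor interactions.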
A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce17Fe78B₆ Alloys.
Tan, Xiaohua; Li, Heyun; Xu, Hui; Han, Ke; Li, Weidan; Zhang, Fang
2017-07-28
Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can yield nanocrystalline products with hard magnetic properties without any heat treatment. In this work, we enhanced the magnetic properties of Ce₁₇Fe₇₈B₆ ribbons by engineering both the microstructure and the volume fraction of the Ce₂Fe₁₄B phase through optimization of the chamber pressure and the wheel speed used for quenching the liquid. We explored the relationship between these two parameters and proposed an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. Optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce₂Fe₁₄B and CeFe₂ nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s⁻¹. Without the conventional heat treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters of rapid solidification in a cost-effective manner.
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. 
Nevertheless, the proposed algorithm is only expected to work well in small-scale systems. In addition, the results of this study can be used to estimate kinetic parameter values during model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
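The particle-swarm half of such a hybrid can be sketched in a few lines; the gravitational-search component and the IPSOGSA-specific improvements are omitted here, and the single-parameter decay model is a toy stand-in for a kinetic pathway model fitted to experimental time courses.

```python
import numpy as np

# Minimal particle swarm optimizer estimating one kinetic parameter by
# minimizing squared error against synthetic "experimental" data.
rng = np.random.default_rng(0)
k_true = 0.7
t = np.linspace(0, 5, 50)
data = np.exp(-k_true * t)                      # toy time-course data

def cost(k):
    return np.sum((data - np.exp(-k * t)) ** 2)

n, iters = 20, 60
pos = rng.uniform(0.0, 2.0, n)                  # candidate parameter values
vel = np.zeros(n)
pbest = pos.copy()                              # each particle's best position
pbest_cost = np.array([cost(k) for k in pos])
gbest = pbest[np.argmin(pbest_cost)]            # swarm-wide best

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    # Inertia 0.7 plus cognitive/social pulls toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(k) for k in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)]
```

In the paper's setting the cost function would compare pathway-model simulations against measured metabolite data, and the gravitational-search update would supplement the velocity rule to improve exploration.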
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Critical mass of public goods and its coevolution with cooperation
NASA Astrophysics Data System (ADS)
Shi, Dong-Mei; Wang, Bing-Hong
2017-07-01
In this study, the enhancing parameter represents the value of the public goods to the public in the public goods game, and it is rescaled as a Fermi-Dirac distribution function of the critical mass. Public goods were divided into two categories, consumable and reusable, and their coevolution with cooperative behavior was studied. We observed that for both types of public goods, cooperation was promoted as the enhancing parameter increased when the critical mass was not very large. An optimal value of the critical mass that led to the best cooperation was identified. We also found that cooperation emerged earlier for reusable public goods, and defection became extinct earlier for consumable public goods. Moreover, we observed that a moderate depreciation rate of public goods resulted in optimal cooperation, and this range became wider as the enhancing parameter increased. The influence of noise on cooperation was studied, and it was shown that cooperation density varied non-monotonically as noise amplitude increased for reusable public goods, whereas it decreased monotonically for consumable public goods. Furthermore, the existence of an optimal critical mass was also identified in three other regular networks. Finally, the simulation results were used to analyze the provision of public goods in detail.
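The Fermi-Dirac rescaling of the enhancing parameter can be written down explicitly. The abstract does not give the functional form or constants, so the expression below, a logistic switch centered at the critical mass, is an assumption for illustration only.

```python
import numpy as np

# Assumed Fermi-Dirac-type rescaling: the effective enhancing parameter r
# switches on as the number of contributors n_c passes the critical mass M.
# r_max, M, and beta (the sharpness) are hypothetical constants.
def enhancement(n_c, r_max=5.0, M=3, beta=2.0):
    return r_max / (1.0 + np.exp(-beta * (n_c - M)))

n = np.arange(0, 6)
r = enhancement(n)   # rises from ~0 below M to ~r_max above it
```

At n_c = M the function sits at exactly half of r_max, so M acts as the threshold at which the public good begins to pay off.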
Differential Evolution Optimization for Targeting Spacecraft Maneuver Plans
NASA Technical Reports Server (NTRS)
Mattern, Daniel
2016-01-01
Previous analysis identified specific orbital parameters as being safer for conjunction avoidance for the TDRS fleet. With TDRS-9 being considered an at-risk spacecraft, a potential conjunction concern was raised should TDRS-9 fail while at a longitude of 12W. This document summarizes the analysis performed to identify if these specific orbital parameters could be targeted using the remaining drift-termination maneuvers for the relocation of TDRS-9 from 41W longitude to 12W longitude.
Quantum approximate optimization algorithm for MaxCut: A fermionic view
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2018-02-01
Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028;
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks, using design of experiments (DoE) approaches including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, based on evaluation of the linear response to a dilution series, was used as the parameter for assessing data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, compared with the default settings, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized using CCD for further improvement. The approach combining optimal parameter settings and the threshold method improved the reliability index by about 9.5 times for the standard mixture and 14.5 times for the human urine data, requiring a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times, even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
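The Plackett-Burman screening stage can be illustrated with the standard 12-run construction for up to 11 two-level factors; which columns would map to which XCMS settings is left hypothetical here.

```python
import numpy as np

# Standard 12-run Plackett-Burman design: the published generator row is
# cyclically shifted 11 times, then a row of all -1 is appended. Each of
# the 11 columns could carry one two-level factor (e.g. an XCMS setting
# at a "low" and "high" value -- the assignment here is hypothetical).
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])

def main_effects(design, response):
    # Each factor's main effect: contrast of its +1 runs against its -1 runs.
    return design.T @ response / (len(response) / 2)
```

Because the columns are mutually orthogonal, 12 runs suffice to screen all 11 main effects independently, which is why PBD is the cheap first stage before the CCD optimization.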
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporated a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
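The local sensitivity analysis described above ranks parameters by how strongly the model cost function responds to small perturbations. A hedged sketch using one-sided finite differences (the normalization convention is an assumption; parameters are assumed nonzero):

```python
def local_sensitivity(cost, params, rel_step=0.01):
    """Normalised local sensitivity of a scalar cost function J to each
    parameter p_i, via one-sided finite differences:
    S_i = (dJ/dp_i) * p_i / J  (dimensionless, comparable across parameters)."""
    base = cost(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1 + rel_step)  # relative perturbation
        dj = cost(perturbed) - base
        sens[name] = (dj / (value * rel_step)) * value / base
    return sens
```

For a toy cost J = a² + 0.1·b at a = 2, b = 1, the parameter `a` dominates, which is the kind of ranking (e.g., PPDF1 and PRDX most sensitive) the study reports.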
Min-Chi Hsiao; Pen-Ning Yu; Dong Song; Liu, Charles Y; Heck, Christi N; Millett, David; Berger, Theodore W
2014-01-01
New interventions using neuromodulatory devices such as vagus nerve stimulation, deep brain stimulation and responsive neurostimulation are available or under study for the treatment of refractory epilepsy. Since the actual mechanisms of the onset and termination of the seizure are still unclear, most researchers or clinicians determine the optimal stimulation parameters through trial-and-error procedures. It is necessary to further explore what types of electrical stimulation parameters (these may include stimulation frequency, amplitude, duration, interval pattern, and location) constitute a set of optimal stimulation paradigms to suppress seizures. In a previous study, we developed an in vitro epilepsy model using hippocampal slices from patients suffering from mesial temporal lobe epilepsy. Using a planar multi-electrode array system, inter-ictal activity from human hippocampal slices was consistently recorded. In this study, we have further transferred this in vitro seizure model to a testbed for exploring the possible neurostimulation paradigms to inhibit inter-ictal spikes. The methodology used to collect the electrophysiological data, the approach to apply different electrical stimulation parameters to the slices are provided in this paper. The results show that this experimental testbed will provide a platform for testing the optimal stimulation parameters of seizure cessation. We expect this testbed will expedite the process for identifying the most effective parameters, and may ultimately be used to guide programming of new stimulating paradigms for neuromodulatory devices.
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
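Cowell's method, one of the propagation choices listed, integrates the Cartesian equations of motion directly. A minimal planar two-body sketch in canonical units (IPOST's actual integrators and force models are far richer; this is only the core idea):

```python
def cowell_step(state, dt, mu=1.0):
    """One RK4 step of Cowell's method for the planar two-body problem:
    r'' = -mu * r / |r|^3, with state = (x, y, vx, vy) in canonical units."""
    def deriv(s):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return (vx, vy, -mu * x / r3, -mu * y / r3)

    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))
```

Propagating a unit circular orbit conserves radius and specific energy (−0.5 in these units) to integrator accuracy, a quick sanity check for any Cowell implementation.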
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
TOPSIS based parametric optimization of laser micro-drilling of TBC coated nickel based superalloy
NASA Astrophysics Data System (ADS)
Parthiban, K.; Duraiselvam, Muthukannan; Manivannan, R.
2018-06-01
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) approach was used to optimize the process parameters for laser micro-drilling of nickel superalloy C263 with a Thermal Barrier Coating (TBC). Plasma spraying was used to deposit the TBC, and a picosecond Nd:YAG pulsed laser was used to drill the specimens. Drilling angle, laser scan speed, and number of passes were considered as input parameters. Based on the machining conditions, a Taguchi L8 orthogonal array was used to plan the experimental runs. Surface roughness and surface crack density (SCD) were taken as the output measures. Surface roughness was measured using a 3D white light interferometer (WLI), and crack density was measured using a scanning electron microscope (SEM). The optimized result achieved from this approach suggests reduced surface roughness and surface crack density. Holes drilled at an inclination angle of 45°, a laser scan speed of 3 mm/s, and 400 passes were found to be optimal. From the analysis of variance (ANOVA), inclination angle and number of passes were identified as the major influencing parameters. The optimized parameter combination exhibited a 19% improvement in surface finish and a 12% reduction in SCD.
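TOPSIS itself is compact enough to sketch in full. A hedged pure-Python version (the weights and benefit/cost flags below are illustrative, not the study's; both roughness and SCD would be cost criteria):

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: vector-normalise the decision matrix,
    weight it, find distances to the ideal and anti-ideal solutions, and
    score by relative closeness (higher = better).
    matrix[i][j] = alternative i on criterion j; benefit[j] = larger-is-better."""
    m, n = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = sum((v[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((v[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

With two smaller-is-better responses, the run whose weighted values coincide with the ideal scores 1.0 and the worst run scores 0.0, giving the preference ordering the paper uses to pick the 45°/3 mm/s/400-pass combination.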
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
NASA Astrophysics Data System (ADS)
Gibbons, Gregory John; Hansell, Robert George
2006-09-01
This article details the down-selection procedure for thermally sprayed coatings for aluminum injection mould tooling. A down-selection metric was used to rank a wide range of coatings. A range of high-velocity oxyfuel (HVOF) and atmospheric plasma spray (APS) systems was used to identify the optimal coating-process-system combinations. Three coatings were identified as suitable for further study: two CrC-NiCr materials and one Fe-Ni-Cr alloy. No APS-deposited coatings were suitable for the intended application due to poor substrate adhesion (SA) and very high surface roughness (SR). The DJ2700-deposited coating properties were inferior to the coatings deposited using other HVOF systems, and thus a Taguchi L18 five-parameter, three-level optimization was used to optimize the SA of CRC-1 and FE-1. Significant mean increases in bond strength were achieved (147±30% for FE-1 [58±4 MPa] and 12±1% for CRC-1 [67±5 MPa]). An analysis of variance (ANOVA) indicated that the coating bond strengths were primarily dependent on powder flow rate and propane gas flow rate, and secondarily dependent on spray distance. The optimal deposition parameters identified were: (CRC-1/FE-1) O2 264/264 standard liters per minute (SLPM); C3H8 62/73 SLPM; air 332/311 SLPM; feed rate 30/28 g/min; and spray distance 150/206 mm.
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
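The Pareto fronts discussed here are the non-dominated sets in objective-function space. A minimal filter for extracting them from a finite set of candidate evaluations (all objectives taken as minimised, which is an assumption; maximised objectives would be negated first):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, all objectives
    minimised. Point a dominates b if a <= b in every objective and a < b
    in at least one; the front is every point no other point dominates."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Checking whether a control run lies on this front is exactly the test the study applies: a dominated control run signals room for objectively informed parameter updates.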
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
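As a concrete instance of the survey's theme, the textbook bounded-search-tree algorithm for k-Vertex-Cover (a standard example of exploiting a small parameter, not code taken from the article):

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-Vertex-Cover: pick any uncovered edge (u, v);
    one endpoint must be in the cover, so branch on both. The tree has depth
    at most k, giving O(2^k * |E|) time -- fast when the parameter k is small.
    Returns a cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None  # edges remain but budget exhausted
    u, v = edges[0]
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]  # edges pick covers vanish
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None
```

A triangle needs two vertices, so the search correctly fails for k = 1 and succeeds for k = 2; the exponential cost is confined to the parameter, not the input size.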
NASA Astrophysics Data System (ADS)
Wang, Xu; Bi, Fengrong; Du, Haiping
2018-05-01
This paper aims to develop a 5-degree-of-freedom driver and seating system model for optimal vibration control. A new method for identifying the driver seating system parameters from experimental vibration measurements has been developed. A parameter sensitivity analysis has been conducted considering the random excitation frequency and system parameter uncertainty. The most and least sensitive system parameters for the transmissibility ratio have been identified. Optimised PID controllers have been developed to reduce the driver's body vibration.
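The transmissibility ratio analysed above has a classical closed form for the single-degree-of-freedom analogue of the seat model; a small sketch of that baseline (the paper's 5-DOF model is not reproduced here):

```python
def transmissibility(r, zeta):
    """Displacement transmissibility of a base-excited 1-DOF
    mass-spring-damper:
        |X/Y| = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    where r = excitation frequency / natural frequency and zeta is the
    damping ratio."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
    return (num / den) ** 0.5
```

The formula reproduces the standard facts a sensitivity study leans on: unity at r = 0 and at r = √2 regardless of damping, and amplification near resonance that damping alone controls.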
Yang, Jian; Liu, Chuangui; Wang, Boqian; Ding, Xianting
2017-10-13
Superhydrophobic surface, as a promising micro/nano material, has tremendous applications in biological and artificial investigations. The electrohydrodynamics (EHD) technique is a versatile and effective method for fabricating micro- to nanoscale fibers and particles from a variety of materials. A combination of critical parameters, such as mass fraction, ratio of N, N-Dimethylformamide (DMF) to Tetrahydrofuran (THF), inner diameter of needle, feed rate, receiving distance, applied voltage as well as temperature, during electrospinning process, to determine the morphology of the electrospun membranes, which in turn determines the superhydrophobic property of the membrane. In this study, we applied a recently developed feedback system control (FSC) scheme for rapid identification of the optimal combination of these controllable parameters to fabricate superhydrophobic surface by one-step electrospinning method without any further modification. Within five rounds of experiments by testing totally forty-six data points, FSC scheme successfully identified an optimal parameter combination that generated electrospun membranes with a static water contact angle of 160 degrees or larger. Scanning electron microscope (SEM) imaging indicates that the FSC optimized surface attains unique morphology. The optimized setup introduced here therefore serves as a one-step, straightforward, and economic approach to fabricate superhydrophobic surface with electrospinning approach.
Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.
2015-01-01
The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.
Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua
2017-07-01
Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and can cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initialization method with adaptive inertia weight based on a chaotic map is proposed. To show that the swarm converges better than with stochastic initialization, and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched by self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
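The Prandtl-Ishlinskii model identified above is a weighted superposition of backlash (play) operators. A compact sketch of the forward model (thresholds and weights here are placeholders for whatever the identification, e.g. chaotic map MPSO, produces; the zero initial state is an assumption):

```python
def play_operator(signal, r, y0=0.0):
    """Backlash (play) operator with threshold r, the elementary hysteresis
    unit of the Prandtl-Ishlinskii model: output follows the input only once
    the input has moved more than r away from it."""
    y, out = y0, []
    for u in signal:
        y = max(u - r, min(u + r, y))
        out.append(y)
    return out

def prandtl_ishlinskii(signal, thresholds, weights):
    """PI hysteresis model: weighted superposition of play operators."""
    ops = [play_operator(signal, r) for r in thresholds]
    return [sum(w * op[i] for w, op in zip(weights, ops))
            for i in range(len(signal))]
```

With threshold 1, an input excursion 0 → 2 → 0 leaves the operator stuck at 1, which is exactly the memory effect the inverse model must cancel for hysteresis compensation.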
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
Computer Optimization of Biodegradable Nanoparticles Fabricated by Dispersion Polymerization.
Akala, Emmanuel O; Adesina, Simeon; Ogunwuyi, Oluwaseun
2015-12-22
Quality by design (QbD) in the pharmaceutical industry involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. QbD helps to understand how process and formulation variables affect product characteristics and subsequent optimization of these variables vis-à-vis final specifications. Statistical design of experiments (DoE) identifies important parameters in a pharmaceutical dosage form design followed by optimizing the parameters with respect to certain specifications. DoE establishes in mathematical form the relationships between critical process parameters together with critical material attributes and critical quality attributes. We focused on the fabrication of biodegradable nanoparticles by dispersion polymerization. Aided by a statistical software, d-optimal mixture design was used to vary the components (crosslinker, initiator, stabilizer, and macromonomers) to obtain twenty nanoparticle formulations (PLLA-based nanoparticles) and thirty formulations (poly-ɛ-caprolactone-based nanoparticles). Scheffe polynomial models were generated to predict particle size (nm), zeta potential, and yield (%) as functions of the composition of the formulations. Simultaneous optimizations were carried out on the response variables. Solutions were returned from simultaneous optimization of the response variables for component combinations to (1) minimize nanoparticle size; (2) maximize the surface negative zeta potential; and (3) maximize percent yield to make the nanoparticle fabrication an economic proposition.
NASA Astrophysics Data System (ADS)
Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.
2017-08-01
The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardness levels. Type of insert, hardness, cutting speed, feed, and depth of cut are taken as process parameters. Taguchi’s L18 orthogonal array was used to conduct the experimental tests. Parametric analysis was carried out to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and on material removal rate. Taguchi-based grey relational analysis (GRA) was used to optimize the process parameters for individual-response and multi-response outputs. Additionally, analysis of variance (ANOVA) was applied to identify the most significant factor.
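Grey relational analysis collapses several responses into a single grade per experimental run. A hedged sketch of the usual recipe (the normalisation conventions and distinguishing coefficient ζ = 0.5 follow common practice, not necessarily this paper; each response column is assumed to vary):

```python
def grey_relational_grades(matrix, larger_better, zeta=0.5):
    """Taguchi-based grey relational analysis: normalise each response to
    [0, 1], compute grey relational coefficients against the ideal sequence
    (all ones), and average them into one grade per run (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        lo, hi = min(col), max(col)  # assumed lo != hi
        for i in range(m):
            if larger_better[j]:
                norm[i][j] = (matrix[i][j] - lo) / (hi - lo)
            else:
                norm[i][j] = (hi - matrix[i][j]) / (hi - lo)
    deltas = [[1.0 - norm[i][j] for j in range(n)] for i in range(m)]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
             for row in deltas]
    return [sum(row) / n for row in coeff]
```

With one smaller-is-better response (e.g., Ra) and one larger-is-better response (e.g., material removal rate), the grades rank the runs for the multi-response optimization the abstract describes.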
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
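The irreducible error in an optimal-estimator analysis is the variance left after conditioning on the model inputs. For a single input parameter, the histogram technique the paper scrutinises looks roughly like this (binning choices are illustrative; the spurious contribution the paper warns about grows with input dimension):

```python
def irreducible_error(x, y, nbins=10):
    """Optimal-estimator analysis, histogram technique: bin the input x,
    estimate the conditional mean E[y|x] per bin, and return the irreducible
    error E[(y - E[y|x])^2] -- the part no model built on x alone can remove."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / nbins or 1.0  # guard against constant x
    idx = [min(int((xi - lo) / width), nbins - 1) for xi in x]
    sums = [0.0] * nbins
    counts = [0] * nbins
    for i, yi in zip(idx, y):
        sums[i] += yi
        counts[i] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return sum((yi - means[i]) ** 2 for i, yi in zip(idx, y)) / len(y)
```

A target that is a pure (bin-resolved) function of the input yields zero irreducible error, while noise independent of the input passes through untouched; real data falls between the two.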
NASA Astrophysics Data System (ADS)
Mohanty, Itishree; Chintha, Appa Rao; Kundu, Saurabh
2018-06-01
The optimization of process parameters and composition is essential to achieve the desired properties with minimal additions of alloying elements in microalloyed steels. In some cases, it may be possible to substitute such steels for those which are more richly alloyed. However, process control involves a larger number of parameters, making the relationship between structure and properties difficult to assess. In this work, neural network models have been developed to estimate the mechanical properties of steels containing Nb + V or Nb + Ti. The outcomes have been validated by thermodynamic calculations and plant data. It has been shown that subtle thermodynamic trends can be captured by the neural network model. Some experimental rolling data have also been used to support the model, which in addition has been applied to calculate the costs of optimizing microalloyed steel. The generated Pareto fronts identify many combinations of strength and elongation, making it possible to select composition and process parameters for a range of applications. The ANN model and the optimization model are being used for prediction of properties in a running plant and for development of new alloys, respectively.
Lumped parametric model of the human ear for sound transmission.
Feng, Bin; Gan, Rong Z
2004-09-01
A lumped parametric model of the human auditory periphery consisting of six masses suspended with six springs and ten dashpots was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The transfer function of the middle ear obtained from human temporal bone experiments with laser Doppler interferometers was used for creating the target function during the optimization process. It was found that, among 14 spring and dashpot parameters, there were five parameters which had pronounced effects on the dynamic behaviors of the model. A detailed discussion of the sensitivity of those parameters is provided, with applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of the ear function and construction of the ear physical model.
Advanced Interactive Display Formats for Terminal Area Traffic Control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Shaviv, G. E.
1999-01-01
This research project deals with an on-line dynamic method for automated viewing parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors at which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), which was designed to deal with the multi-modal characteristics of the objective function and exploit its time-evolving nature.
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
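The PSO used here to select features and tune SVM hyperparameters is, at its core, the standard velocity-update swarm. A self-contained sketch minimising a generic cost function (the SVM training wrapper is omitted; swarm settings and bounds are assumptions, and in the paper the cost would be classification error over candidate C/gamma values):

```python
import random

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    attraction to both. `cost` maps a parameter vector to a scalar."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp positions to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

On a smooth two-dimensional test cost the swarm homes in on the minimum within a few dozen iterations, which is why it is a popular wrapper for low-dimensional hyperparameter searches.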
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment: the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on GRNs is a multi-variable, multi-objective, multi-peak optimization problem for which it is difficult to acquire satisfactory, and especially high-quality, solutions. A new best-neighbor particle swarm optimization algorithm is proposed to accomplish this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used. The simulation results revealed that the proposed algorithm can identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for guiding the design of GRNs with superior robust adaptation.
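The Latin hypercube initialization mentioned above can be sketched as follows; the 12-dimensional unit ranges are placeholders, not the parameter bounds used in the paper.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each dimension is divided into n_samples equal
    strata and every stratum is sampled exactly once (in shuffled order)."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                     # random stratum-to-sample pairing
        width = (hi - lo) / n_samples
        for i in range(n_samples):
            # one point drawn uniformly inside each stratum
            samples[i][d] = lo + (strata[i] + rng.random()) * width
    return samples

# e.g. an initial swarm of 30 particles for a 12-parameter model
pop = latin_hypercube(30, [(0.0, 1.0)] * 12)
```

Compared with plain uniform sampling, this guarantees that every parameter's range is covered evenly even with a small population, which matters for a 12-dimensional multi-peak landscape.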
Optimization of laser welding thin-gage galvanized steel via response surface methodology
NASA Astrophysics Data System (ADS)
Zhao, Yangyang; Zhang, Yansong; Hu, Wei; Lai, Xinmin
2012-09-01
The increasing demand for light weight and durability makes thin-gage galvanized steels (<0.6 mm) attractive for future automotive applications. Laser welding, well known for its deep penetration, high speed and small heat-affected zone, provides a potential solution for welding thin-gage galvanized steels in the automotive industry. In this study, the effect of the laser welding parameters (laser power, welding speed, gap and focal position) on the weld bead geometry (weld depth, weld width and surface concavity) of 0.4 mm-thick galvanized SAE1004 steel in a lap joint configuration was investigated experimentally. The process windows of the relevant process parameters were thereby determined. Response surface methodology (RSM) was then used to develop models that predict the relationship between the processing parameters and the laser weld bead profile, and to identify the optimal combination of laser welding input variables for a superior weld joint. Under the optimal welding parameters, defect-free welds were produced, and the average aspect ratio increased by about 30%, from 0.62 to 0.83.
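A second-order response surface of the kind RSM produces can be sketched for a single factor; the weld-depth data below are illustrative values, not measurements from the study.

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_quadratic_rsm(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 (a one-factor second-order
    response surface) via the normal equations X'X b = X'y."""
    rows = [[1.0, x, x * x] for x in xs]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve(xtx, xty)

# hypothetical weld depth (mm) versus laser power (kW); illustrative data only
power = [1.0, 1.5, 2.0, 2.5, 3.0]
depth = [0.20, 0.31, 0.38, 0.41, 0.40]
b0, b1, b2 = fit_quadratic_rsm(power, depth)
p_opt = -b1 / (2.0 * b2)   # stationary point of the fitted surface: 2.625 kW here
```

The full four-factor model in the study works the same way, just with cross terms (e.g. power x speed) added to the design matrix before forming the normal equations.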
Vyska, Martin; Cunniffe, Nik; Gilligan, Christopher
2016-10-01
The deployment of crop varieties that are partially resistant to plant pathogens is an important method of disease control. However, a trade-off may occur between the benefits of planting the resistant variety and a yield penalty, whereby the standard susceptible variety outyields the resistant one in the absence of disease. This presents a dilemma: deploying the resistant variety is advisable only if disease occurs and is sufficiently severe for the resistant variety to outyield the infected standard variety. Additionally, planting the resistant variety carries a further advantage: it reduces the probability of disease invading. Therefore, viewed from the perspective of a grower community, there is likely to be an optimal trade-off and thus an optimal cropping density for the resistant variety. We introduce a simple stochastic epidemiological model to investigate the trade-off and the consequences for crop yield. Focusing on susceptible-infected-removed epidemic dynamics, we use the final size equation to calculate the surviving host population and hence the yield, an approach suitable for rapid epidemics in agricultural crops. We identify a single compound parameter, which we call the efficacy of resistance, that incorporates the changes in susceptibility, infectivity and durability of the resistant variety. We use the compound parameter to construct policy plots that identify the optimal strategy for given parameter values when an outbreak is certain. When the outbreak is uncertain, we show that for some parameter values planting the resistant variety is optimal even when it would not be optimal during an outbreak, because the resistant variety reduces the probability of an outbreak occurring. © 2016 The Author(s).
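The final size equation invoked above has a standard fixed-point form for susceptible-infected-removed dynamics. A minimal sketch, assuming a homogeneously mixing population with basic reproduction number R0 (the paper's model additionally partitions hosts into resistant and susceptible varieties):

```python
import math

def final_size(r0, tol=1e-12):
    """Fraction of hosts ultimately infected in an SIR epidemic, from the
    final size relation z = 1 - exp(-R0 * z), solved by fixed-point iteration
    (starting above zero to avoid the trivial root z = 0)."""
    z = 0.5
    while True:
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

# surviving (never-infected) host fraction, which determines yield here
z = final_size(2.0)          # about 0.797 for R0 = 2
survivors = 1.0 - z
```

Because the iteration map is increasing and its slope is below one near the nontrivial root whenever R0 > 1, the iteration converges monotonically from the 0.5 starting point.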
Dynamical modeling and multi-experiment fitting with PottersWheel
Maiwald, Thomas; Timmer, Jens
2008-01-01
Motivation: Modelers in systems biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We present the comprehensive modeling framework PottersWheel (PW), including novel functionalities to satisfy these requirements, with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems such as signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space, allowing for robust parameter calibration. Model investigation includes statistical tests for model-data compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. In an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair that decreases the fitting duration for a realistic benchmark model by a factor of more than 3000 compared to MATLAB with the Optimization Toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains detailed documentation and introductory videos. The program has been used intensively since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in the automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on perturbing the most sensitive decision variables. The performance of DDS with sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
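A minimal sketch of DDS with sensitivity-guided selection. The weighting scheme below (scaling each variable's inclusion probability by its normalized sensitivity score) is an assumed illustration of the idea, not the paper's exact rule; in the original algorithm every variable is included with the same probability.

```python
import math
import random

def dds(objective, bounds, max_evals=300, r=0.2, weights=None, seed=1):
    """Dynamically dimensioned search (minimization). `weights` are optional
    per-variable sensitivity scores biasing which variables get perturbed."""
    rng = random.Random(seed)
    dim = len(bounds)
    weights = weights or [1.0] * dim
    wsum = float(sum(weights))
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_val = objective(best)
    for i in range(1, max_evals):
        # neighbourhood shrinks: fewer variables are perturbed as i grows
        p = 1.0 - math.log(i) / math.log(max_evals)
        chosen = [d for d in range(dim)
                  if rng.random() < p * dim * weights[d] / wsum]
        if not chosen:                      # DDS always perturbs >= 1 variable
            chosen = [rng.randrange(dim)]
        cand = best[:]
        for d in chosen:
            lo, hi = bounds[d]
            cand[d] = min(max(cand[d] + rng.gauss(0.0, r * (hi - lo)), lo), hi)
        val = objective(cand)
        if val <= best_val:                 # greedy acceptance
            best, best_val = cand, val
    return best, best_val

# toy objective standing in for a hydrologic-model error metric
best, val = dds(lambda x: sum(v * v for v in x),
                bounds=[(-5.0, 5.0)] * 6,
                weights=[3.0, 2.0, 1.0, 1.0, 1.0, 1.0])
```

With uniform weights this reduces to standard DDS; the weighted variant simply spends more of its shrinking perturbation budget on the variables the sensitivity analysis flagged.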
NASA Astrophysics Data System (ADS)
Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg
2017-04-01
In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically based eco-hydrological models are used in mountain areas, a large number of parameters, topographic conditions and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one per land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; and (III) to identify possible model structural deficiencies or uncertainties in boundary conditions.
Simulations have been performed with the GEOtop 2.0 model, a physically based, fully distributed, integrated eco-hydrological model specifically designed for mountain regions: it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on particle swarm optimization has been developed, available as an R package at https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal dynamics of SMC and ET with an RMSE of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show that the most sensitive parameters are both soil and canopy structural properties; however, the ranking is affected by the choice of the target function and by local topographic conditions. In particular, local slope/aspect influences the results for stations located on hillslopes, with marked seasonal differences. Results for locations on the valley floor are strongly controlled by the choice of the bottom water-flux boundary condition. The poorer model performance in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control exerted by vapor pressure deficit for this particular type of vegetation. The results of this sensitivity analysis could be extended to other physically based distributed models, and also provide valuable insights for optimizing new experimental designs.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. 
We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
Ingvarsson, Pall Thor; Yang, Mingshi; Mulvad, Helle; Nielsen, Hanne Mørck; Rantanen, Jukka; Foged, Camilla
2013-11-01
The purpose of this study was to identify and optimize the spray drying parameters of importance for the design of an inhalable powder formulation of a cationic liposomal adjuvant composed of dimethyldioctadecylammonium (DDA) bromide and trehalose-6,6'-dibehenate (TDB). A quality by design (QbD) approach was applied to identify and link critical process parameters (CPPs) of the spray drying process to critical quality attributes (CQAs) using risk assessment and design of experiments (DoE), followed by identification of an optimal operating space (OOS). A central composite face-centered design was carried out, followed by multiple linear regression analysis. Four CQAs were identified: the mass median aerodynamic diameter (MMAD), the liposome stability (size) during processing, the moisture content and the yield. Five CPPs (drying airflow, feed flow rate, feedstock concentration, atomizing airflow and outlet temperature) were identified and tested systematically. The MMAD and the yield were successfully modeled. For liposome size stability, the ratio between the size after and before spray drying was modeled successfully. The model for the residual moisture content was poor, although the moisture content was below 3% across the entire design space. Finally, the OOS was drafted from the constructed models for the spray drying of trehalose-stabilized DDA/TDB liposomes. A QbD approach to the spray drying process should include careful consideration of the quality target product profile. This approach, implementing risk assessment and DoE, was successfully applied to optimize the spray drying of an inhalable DDA/TDB liposomal adjuvant designed for pulmonary vaccination.
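A central composite face-centered design in coded units can be generated as follows; this sketch emits a single centre point, whereas practical designs (likely including the one in the study) replicate the centre point to estimate pure error.

```python
from itertools import product

def ccf_design(n_factors):
    """Face-centred central composite design in coded units (-1, 0, +1):
    2^k factorial corners + 2k face-centre (axial) points + a centre point."""
    corners = [list(pt) for pt in product((-1, 1), repeat=n_factors)]
    axials = []
    for d in range(n_factors):
        for level in (-1, 1):
            pt = [0] * n_factors
            pt[d] = level                  # axial point on one face centre
            axials.append(pt)
    return corners + axials + [[0] * n_factors]

# coded runs for the five CPPs (drying airflow, feed flow rate, feedstock
# concentration, atomizing airflow, outlet temperature): 32 + 10 + 1 = 43
runs = ccf_design(5)
```

Each coded level is then mapped linearly onto the physical range of its CPP before the runs are executed and the responses fed to the regression.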
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Method of optimization onboard communication network
NASA Astrophysics Data System (ADS)
Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.
2018-02-01
In this article, optimization levels for an onboard communication network (OCN) are proposed. We defined the basic parameters necessary for evaluating and comparing modern OCNs, and we also identified a set of initial data for possible modeling of the OCN. We further proposed a mathematical technique, based on the principles and ideas of binary programming, for implementing the OCN optimization procedure. It is shown that the binary programming technique makes it possible to obtain an inherently optimal solution for avionics tasks. An example applying the proposed approach to the problem of device assignment in an OCN is considered.
An optimization method for defects reduction in fiber laser keyhole welding
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Jiang, Ping; Shao, Xinyu; Wang, Chunming; Li, Peigen; Mi, Gaoyang; Liu, Yang; Liu, Wei
2016-01-01
Laser welding has been widely used in the automotive, power, chemical, nuclear and aerospace industries. The quality of welded joints is closely related to existing defects, which are primarily determined by the welding process parameters. This paper proposes a defect-reduction optimization method that takes into consideration both the formation mechanism of welding defects and the geometric features of the weld. The analysis of the welding-defect formation mechanism aims to investigate the relationship between welding defects and process parameters, and the weld features are considered in order to identify the optimal process parameters for the desired welded joints with minimum defects. An improved back-propagation neural network, which models nonlinear problems well, is adopted to establish the mathematical model, and the resulting model is solved by a genetic algorithm. The proposed method is validated by macroweld profile, microstructure and microhardness in confirmation tests. The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints in fiber laser keyhole welding in practical production.
Parameter identification of thermophilic anaerobic degradation of valerate.
Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini
2003-01-01
The considered mathematical model of valerate decomposition contains three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial biomass concentrations. From a structural identifiability study, we concluded that simultaneous batch experiments with different initial conditions are necessary for estimating these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was performed by optimizing the sum of the multiple determination coefficients over all measured state variables and all experiments simultaneously. The estimated values of the kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, confidence intervals, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which more experiments should be conducted to improve its identifiability. In this article, we also discuss kinetic parameter estimation methods.
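A batch experiment of this kind can be simulated with Monod-type kinetics and non-competitive product inhibition. The model form, parameter names and values below are illustrative assumptions for the sketch, not the paper's exact equations or estimated values.

```python
def simulate_batch(s0, p0, x0, mu_max, ks, ki, y_xs, y_ps,
                   t_end=100.0, dt=0.01):
    """Forward-Euler batch simulation: Monod growth on substrate s (valerate)
    with non-competitive inhibition by product p (acetate); x is biomass."""
    s, p, x, t = s0, p0, x0, 0.0
    while t < t_end and s > 1e-6:
        mu = mu_max * s / (ks + s) * ki / (ki + p)      # inhibited growth rate
        ds = min((mu / y_xs) * x * dt, s)               # substrate consumed
        s -= ds
        p += y_ps * ds                                  # product formed
        x += mu * x * dt                                # biomass growth
        t += dt
    return s, p, x, t

# hypothetical parameter values for one batch (units: g/L and 1/h)
s, p, x, t = simulate_batch(s0=10.0, p0=0.0, x0=0.1,
                            mu_max=0.3, ks=1.0, ki=5.0, y_xs=0.1, y_ps=0.8)
```

Fitting would wrap this simulator in an optimizer that, as in the paper, maximizes the summed determination coefficients over all four batches at once, with each batch differing only in its initial acetate concentration.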
Chen, Fei-Fei; Wu, Yan; Ge, Fa-Huan
2012-03-01
The aim was to optimize the conditions for supercritical CO2 extraction of Prunus armeniaca oil and to identify its components by GC-MS. The SFE-CO2 extraction was optimized by response surface methodology, and GC-MS was used to analyze the compounds of Prunus armeniaca oil. A model equation for the extraction rate of Prunus armeniaca oil by supercritical CO2 extraction was established, and the optimal parameters determined from the equation were an extraction pressure of 27 MPa and a temperature of 39 degrees C, giving an extraction rate of 44.5%. Sixteen main compounds of the supercritically extracted Prunus armeniaca oil were identified by GC-MS, with unsaturated fatty acids accounting for 92.6%. The process is simple and can be used for the extraction of Prunus armeniaca oil.
Optimization of vascular-targeting drugs in a computational model of tumor growth
NASA Astrophysics Data System (ADS)
Gevertz, Jana
2012-04-01
A biophysical tool is introduced that seeks to provide a theoretical basis for helping drug design teams assess the most promising drug targets and design optimal treatment strategies. The tool is grounded in a previously validated computational model of the feedback that occurs between a growing tumor and the evolving vasculature. In this paper, the model is used to explore the therapeutic effectiveness of two drugs that target the tumor vasculature: angiogenesis inhibitors (AIs) and vascular disrupting agents (VDAs). Using sensitivity analyses, the impact of VDA dosing parameters is explored, as are the effects of administering a VDA with an AI. Further, a stochastic optimization scheme is utilized to identify an optimal dosing schedule for treatment with an AI and a chemotherapeutic. The identified treatment regimen can successfully halt simulated tumor growth, even after the cessation of therapy.
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01.
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Wang, Hong-Hua
2014-01-01
A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of the PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated for various PV module parameters under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233
An optimized ensemble local mean decomposition method for fault detection of mechanical components
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang
2017-03-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimal set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (relative RMSE) is used to evaluate the decomposition performance of ELMD for a given amplitude of the added white noise. Once a maximum relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (a rolling bearing, a gear and a diesel engine) under faulty operating conditions.
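The relative RMSE index can be sketched as follows, assuming one common definition (RMS of the reconstruction error normalised by the RMS of the original signal); the paper's exact normalisation may differ.

```python
import math

def relative_rmse(signal, reconstruction):
    """Relative RMSE: RMS of the reconstruction error divided by the RMS of
    the original signal (one common definition; the paper's may differ)."""
    n = len(signal)
    mse = sum((s - r) ** 2 for s, r in zip(signal, reconstruction)) / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    return math.sqrt(mse) / rms

# toy check: a reconstruction that recovers 95% of the signal amplitude
sig = [math.sin(0.1 * k) for k in range(200)]
rec = [0.95 * v for v in sig]
err = relative_rmse(sig, rec)   # exactly 0.05 for this construction
```

In the OELMD setting, `reconstruction` would be the sum of the product functions returned by ELMD for one candidate noise amplitude, and the amplitude maximising this index would be retained.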
Analysis and design of a genetic circuit for dynamic metabolic engineering.
Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan
2013-08-16
Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
The forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of how the optimal parameters improve the precipitation simulation results were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that ASMO is highly efficient for optimizing WRF model parameters.
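The surrogate-assisted loop that ASMO implements can be illustrated with a minimal Python sketch: fit a cheap surrogate to the points evaluated so far, evaluate the true model at the surrogate's minimizer, and refit. Everything below is hypothetical: `expensive_model` is a one-dimensional stand-in for a WRF skill score, and the RBF surrogate, sample counts, and bounds are illustrative choices, not the ASMO implementation.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly WRF run: returns a scalar forecast error.
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

def rbf_surrogate(X, y, eps=2.0):
    # Fit a Gaussian RBF interpolant to the sampled points.
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.lstsq(K, y, rcond=None)[0]
    return lambda q: np.exp(-eps * (q - X) ** 2) @ w

def asmo(n_init=5, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, n_init)                 # initial space-filling sample
    y = np.array([expensive_model(x) for x in X])
    for _ in range(n_iter):
        pred = rbf_surrogate(X, y)
        cand = np.linspace(0, 1, 201)
        x_new = cand[np.argmin([pred(c) for c in cand])]  # surrogate minimum
        X = np.append(X, x_new)                            # adaptive refinement
        y = np.append(y, expensive_model(x_new))
    return X[np.argmin(y)], y.min()

best_x, best_y = asmo()
```

With such illustrative settings the loop spends only a few dozen true-model evaluations, which is the same budget-saving idea behind the 127-sample result reported above.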
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and geometrically nonlinear approaches. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain. Obtaining correct values for these parameters is nevertheless essential to ensure the proper function of the material model. Today, one possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model.
Within this paper, the material parameters of the model are identified through the interaction of nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used in research involving dynamic and high-speed tensile loading. Based on the obtained results, it can be concluded that this goal has been reached.
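The inverse-identification idea (adjust model parameters until the simulated curve best matches the measured load-extension curve) can be sketched in Python on a toy softening law. The `traction` function, its parameters, and the grid search below are all hypothetical stand-ins for the finite-element simulation and the gradient-based/nature-inspired optimizers used in the paper.

```python
import numpy as np

def traction(u, f_t, w_c):
    # Hypothetical exponential softening law: tensile strength f_t,
    # characteristic crack opening w_c (NOT the CSCM formulation itself).
    return f_t * np.exp(-u / w_c)

rng = np.random.default_rng(1)
u = np.linspace(0.0, 0.2, 50)
exp_data = traction(u, 3.0, 0.05) + rng.normal(0, 0.02, u.size)  # pseudo "experiment"

def sse(p):
    # Squared mismatch between simulated and "experimental" curves.
    return float(np.sum((traction(u, *p) - exp_data) ** 2))

# Coarse grid search as a stand-in for the coupled optimizer/simulation loop.
grid = [(f, w) for f in np.linspace(1.0, 5.0, 41)
               for w in np.linspace(0.01, 0.10, 46)]
f_id, w_id = min(grid, key=sse)
```

The identified pair `(f_id, w_id)` lands near the values used to generate the pseudo-experimental curve, which is exactly the success criterion of the inverse procedure.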
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
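A random-walk Metropolis sampler of the kind used for such ODE models can be sketched on a deliberately simple surrogate: a one-parameter log-linear viral decay with Gaussian noise. The model, noise level, proposal scale, and fixed intercept below are illustrative assumptions, not the study's HIV model.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                       # sampling times (days), assumed
true_delta = 0.5                                 # hypothetical decay rate
log_v = 5.0 - true_delta * t + rng.normal(0, 0.1, t.size)  # noisy log viral load

def log_post(delta, sigma=0.1):
    # Gaussian likelihood, flat prior on delta > 0 (intercept held fixed).
    resid = log_v - (5.0 - delta * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampler for the decay rate.
delta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = delta + rng.normal(0, 0.05)
    lp_prop = log_post(prop) if prop > 0 else -np.inf
    if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject step
        delta, lp = prop, lp_prop
    samples.append(delta)
posterior = np.array(samples[1000:])             # discard burn-in
```

As in the paper, the output is a posterior distribution (here for one parameter), not a single point estimate; a nonlinear least-squares fit could seed the chain's starting value.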
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results at sampling points (i.e., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with high-dimensional parameter spaces. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given desirable values of the target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationships between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters that recover target properties of the physical system (e.g., from experimental measurements) when those force-field parameters and formulations cannot be derived from the microscopic level in a straightforward way.
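The sparse-recovery step can be illustrated with a small Python sketch: a Legendre (gPC-style) basis, a response that uses only two basis terms, and ISTA (iterative soft thresholding) as a basic l1-regularized solver. The basis size, sampling, and regularization weight are illustrative assumptions; the paper's actual compressive-sensing solver and DPD response surfaces are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)

def basis(x, n_terms=10):
    # Legendre polynomial design matrix (a simple gPC basis on [-1, 1]).
    return np.polynomial.legendre.legvander(x, n_terms - 1)

# Sparse "response surface": only 2 of the 10 gPC coefficients are nonzero.
c_true = np.zeros(10); c_true[1] = 1.5; c_true[3] = -0.7
x = rng.uniform(-1, 1, 20)            # random sampling points ("simulations")
y = basis(x) @ c_true

# ISTA recovers the sparse coefficient vector from the samples.
A = basis(x)
step = 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(10)
lam = 0.01                            # l1 weight, illustrative
for _ in range(20000):
    c = c - step * (A.T @ (A @ c - y))                      # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0)  # soft threshold
```

The soft-threshold step is what encodes the "sparse coefficients" prior: terms the data do not support are driven exactly to zero.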
Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.
2015-01-01
In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is possible to identify mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures are dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from MRE data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The sensitivity of the OVFM to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the identification performance of the OVFM: different biases in the identified parameters are induced by the spatial resolution and by experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. Identification results obtained from actual experiments are briefly presented. PMID:26146416
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
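Of the four SA approaches, standardized regression coefficients (SRC) are the simplest to sketch: standardize inputs and output, regress, and rank parameters by |SRC|. The three-parameter linear test function below is a hypothetical stand-in for the Community Land Model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Three hypothetical land-model parameters; the third barely matters.
X = rng.uniform(0, 1, (n, 3))
y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, n)

# Standardized regression coefficients: regress the standardized response
# on the standardized inputs; |SRC| gives the sensitivity ranking.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))
```

Like the paper's comparison, different SA metrics would agree on the dominant parameter here but could differ on the relative weight of the secondary ones.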
Warpage optimization on a mobile phone case using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.
2017-09-01
Plastic injection moulding is a popular manufacturing method because it is not only reliable but also efficient and cost-saving, and it is able to produce plastic parts with detailed features and complex geometry. However, defects arising in the injection moulding process degrade the quality and aesthetics of the moulded product. The most common defect occurring in the process is warpage, and inappropriate process parameter settings on the injection moulding machine are one of the reasons it occurs. The aims of this study were to improve the quality of an injection moulded part by determining the process parameters that minimize warpage using Response Surface Methodology (RSM). Subsequently, the most significant parameter was identified, and the recommended parameter settings were compared with the parameter settings optimized by RSM. A mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time, and cooling time were selected as variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012, and the RSM analysis was performed using Design Expert 7.0. The warpage in the y-direction recommended by RSM was reduced by 70%, showing that RSM performed well in resolving the warpage issue.
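The RSM workflow (run a small design, fit a full quadratic model, solve for the stationary point) can be sketched in Python. The two coded factors, the synthetic warpage function, and the face-centred design below are illustrative assumptions, not the Moldflow/Design Expert setup of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def warpage(x1, x2):
    # Hypothetical warpage response in two coded factors
    # (say, melt temperature and packing pressure); minimum at (0.4, -0.2).
    return 1.0 + (x1 - 0.4) ** 2 + 0.5 * (x2 + 0.2) ** 2

# Face-centred 3x3 design in [-1, 1]^2 with simulated measurement noise.
pts = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
y = warpage(pts[:, 0], pts[:, 1]) + rng.normal(0, 0.01, len(pts))

# Full quadratic response surface: 1, x1, x2, x1^2, x2^2, x1*x2.
A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, -g)
```

In a real study the fitted coefficients also supply the significance ranking of the factors, which is how the "most significant parameter" above is identified.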
NASA Astrophysics Data System (ADS)
Sivandran, Gajan; Bras, Rafael L.
2012-12-01
In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots exert strong control over the partitioning of soil moisture and, when a static root profile is assumed, predetermine the manner in which this partitioning is undertaken. A coupled, dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform (a one-parameter model) and logistic (a two-parameter model). The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations explored the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profiles, with land-surface moisture fluxes dominated by transpiration. Toward the lower-conductivity end of the soil spectrum, the optimal rooting profile becomes shallower and evaporation gradually becomes the dominant flux from the land surface.
This study offers a methodology through which local plant, soil, and climate can be accounted for in the parameterization of rooting profiles in semiarid regions.
Development and evaluation of paclitaxel nanoparticles using a quality-by-design approach.
Yerlikaya, Firat; Ozgen, Aysegul; Vural, Imran; Guven, Olgun; Karaagaoglu, Ergun; Khan, Mansoor A; Capan, Yilmaz
2013-10-01
The aims of this study were to develop and characterize paclitaxel nanoparticles, to identify and control critical sources of variability in the process, and to understand the impact of formulation and process parameters on the critical quality attributes (CQAs) using a quality-by-design (QbD) approach. A risk assessment study was performed on various formulation and process parameters to determine their impact on the CQAs of the nanoparticles, which were determined to be average particle size, zeta potential, and encapsulation efficiency. Potential risk factors were identified using an Ishikawa diagram and screened by a Plackett-Burman design, and the nanoparticles were finally optimized using a Box-Behnken design. The optimized formulation was further characterized by Fourier transform infrared spectroscopy, X-ray diffractometry, differential scanning calorimetry, scanning electron microscopy, atomic force microscopy, and gas chromatography. Paclitaxel was observed to transform from the crystalline to the amorphous state while being completely encapsulated into the nanoparticles. The nanoparticles were spherical, smooth, and homogeneous, with no dichloromethane residue. An in vitro cytotoxicity test showed that the developed nanoparticles are more efficient than free paclitaxel in terms of antitumor activity (by more than 25%). In conclusion, this study demonstrated that understanding formulation and process parameters under the philosophy of QbD is useful for the optimization of complex drug delivery systems. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review.
Mihl, Casper; Maas, Monique; Turek, Jakub; Seehofnerova, Anna; Leijenaar, Ralph T H; Kok, Madeleine; Lobbes, Marc B I; Wildberger, Joachim E; Das, Marco
2017-04-01
Background: Various injection parameters influence enhancement of the coronary arteries, and there is no consensus in the literature regarding the optimal contrast media (CM) injection protocol. The aim of this study is to provide an update on the effect of different CM injection parameters on coronary attenuation in coronary computed tomographic angiography (CCTA). Method: Studies published between January 2001 and May 2014 identified via PubMed, Embase, and MEDLINE were evaluated. Using predefined inclusion criteria and a data extraction form, the content of each eligible study was assessed. Initially, 2551 potential studies were identified; after applying our criteria, 36 studies were found to be eligible. Studies were systematically assessed for quality based on the validated Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-II checklist. Results: Extracted data proved to be heterogeneous and often incomplete. The injection protocols and outcomes of the included publications were very diverse, and the results are difficult to compare. Based on the extracted data, it remains unclear which injection parameter is the most important determinant of adequate attenuation. It is likely that a parameter that combines multiple parameters (e.g. the iodine delivery rate, IDR) will be the most suitable determinant of coronary attenuation in CCTA protocols. Conclusion: Research should be directed towards determining the influence of different injection parameters and defining individualized optimal IDRs tailored to patient-related factors (ideally in large randomized trials). Key points: · This systematic review provides insight into decisive factors for coronary attenuation. · Different and contradictory outcomes are reported on coronary attenuation in CCTA. · One parameter combining multiple parameters (IDR) is likely decisive for coronary attenuation. · Research should aim at defining individualized optimal IDRs tailored to individual factors.
· Future work should be directed at the influence of the different injection parameters. Citation Format: Mihl C, Maas M, Turek J et al. Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review. Fortschr Röntgenstr 2017; 189: 312-325. © Georg Thieme Verlag KG Stuttgart · New York.
Dynamic parameter identification of robot arms with servo-controlled electrical motors
NASA Astrophysics Data System (ADS)
Jiang, Zhao-Hui; Senda, Hiroshi
2005-12-01
This paper addresses the issue of dynamic parameter identification for a robot manipulator with servo-controlled electrical motors. It is assumed that all kinematic parameters, such as link lengths, are known, and that only the dynamic parameters, comprising masses, moments of inertia, and their functions, need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters, taking the dynamic characteristics of the motor and servo unit into consideration. Then, we apply the parameter identification approach to each link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification, and the optimal solution is guaranteed in the sense of least squares of the mean errors. A direct-drive (DD) SCARA-type industrial robot arm, the AdeptOne, is used as an application example. Simulations and experiments for both open-loop and closed-loop control are carried out, and comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
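Because the dynamics are linear in the unknown parameters, identification reduces to least squares on a regressor matrix, which the pseudo-inverse solves directly. The one-link toy model below (an inertia term plus a gravity term) is a hypothetical miniature of the approach, not the AdeptOne model.

```python
import numpy as np

rng = np.random.default_rng(5)
# One-link arm: tau = p1 * qdd + p2 * cos(q), i.e. linear in the dynamic
# parameters p1 (effective inertia) and p2 (gravity term, m*g*l). Values assumed.
p_true = np.array([0.8, 2.5])
t = np.linspace(0, 5, 200)
q = np.sin(t); qdd = -np.sin(t)                 # excitation trajectory
Y = np.column_stack([qdd, np.cos(q)])           # regressor matrix
tau = Y @ p_true + rng.normal(0, 0.01, t.size)  # noisy joint-torque measurements

# Least-squares estimate via the Moore-Penrose pseudo-inverse.
p_hat = np.linalg.pinv(Y) @ tau
```

In practice the excitation trajectory must be rich enough to keep the regressor well conditioned, which is why identification experiments use persistently exciting motions.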
Modal parameter identification of a CMUT membrane using response data only
NASA Astrophysics Data System (ADS)
Lardiès, Joseph; Bourbon, Gilles; Moal, Patrice Le; Kacem, Najib; Walter, Vincent; Le, Thien-Phu
2018-03-01
Capacitive micromachined ultrasonic transducers (CMUTs) are microelectromechanical systems used for the generation of ultrasound. The fundamental element of the transducer is a clamped, thin, metallized membrane that vibrates under voltage variations. To control such oscillations and to optimize the dynamic response, it is necessary to know the modal parameters of the membrane, such as the resonance frequency and the damping and stiffness coefficients. The purpose of this work is to identify these parameters using only the time data obtained from the displacement of the membrane center. Dynamic measurements are conducted in the time domain, and two methods are used to identify the modal parameters: a subspace method based on an innovation model of the state-space representation, and the continuous wavelet transform method based on the ridge of the wavelet transform of the displacement. Experimental results are presented showing the effectiveness of these two procedures in modal parameter identification.
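Extracting modal parameters from response data alone can be sketched with the classical log-decrement route, a simpler cousin of the subspace and wavelet-ridge methods used here: take the FFT peak for the resonance frequency and the decay of successive displacement peaks for the damping ratio. The mode frequency, damping ratio, sampling rate, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 10000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.2, 1 / fs)
f0, zeta = 250.0, 0.02                         # hypothetical membrane mode
wd = 2 * np.pi * f0 * np.sqrt(1 - zeta ** 2)   # damped angular frequency
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(wd * t)
x += rng.normal(0, 1e-4, t.size)               # light measurement noise

# Resonance frequency from the FFT peak of the ring-down.
spec = np.abs(np.fft.rfft(x))
f_est = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec)]

# Damping ratio from the logarithmic decrement of successive displacement peaks.
peaks = [i for i in range(1, t.size - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0.05]
d = np.log(x[peaks[0]] / x[peaks[10]]) / 10    # mean decrement over 10 cycles
zeta_est = d / np.sqrt(4 * np.pi ** 2 + d ** 2)
```

The subspace and wavelet methods in the paper serve the same purpose but remain usable for closely spaced modes and lower signal-to-noise ratios, where this simple peak-picking breaks down.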
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm, and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data, and the obtained model is found to be in good agreement with the measured data.
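A basic firefly algorithm with a decaying randomization parameter (one simple form of "dynamic process control parameter") can be sketched as follows. The three-parameter quadratic `model_error` is a stand-in for the Bouc-Wen fitting error, and all coefficients are illustrative, not the paper's modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
target = np.array([0.5, -0.3, 1.2])            # hypothetical "true" parameters

def model_error(p):
    # Stand-in for the Bouc-Wen data-mismatch objective.
    return float(np.sum((p - target) ** 2))

n, dim, iters = 20, 3, 100
beta0, gamma, alpha0 = 1.0, 0.05, 0.1          # attraction and noise settings
pos = rng.uniform(-2, 2, (n, dim))
for it in range(iters):
    err = np.array([model_error(p) for p in pos])
    alpha = alpha0 * 0.95 ** it                # decaying randomization (dynamic control)
    for i in range(n):
        for j in range(n):
            if err[j] < err[i]:                # move i toward the brighter firefly j
                r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] = pos[i] + beta * (pos[j] - pos[i]) + alpha * rng.normal(0, 1, dim)
        err[i] = model_error(pos[i])
best = min(pos, key=model_error)
```

Decaying the randomization term is what trades early exploration for late-stage accuracy; a fixed alpha would leave the swarm jittering around the optimum.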
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach to the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes a Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey refuge (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the model with respect to all variables is performed, which supports the theoretical study. To estimate the unknown parameters from data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are compared with true, noise-free data, and the system dynamics with the true parametric values is found to be similar to that with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
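A pseudo-random search of the kind described (sample a box, keep the best point, shrink the box around it, repeat) can be sketched on a toy curve-fitting problem. The exponential-decay model below is a deliberately simple surrogate for the predator-prey trajectory misfit; the bounds and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 60)
p_true = np.array([1.8, 0.4])                  # hypothetical parameters
data = p_true[0] * np.exp(-p_true[1] * t) + rng.normal(0, 0.01, t.size)

def cost(p):
    # Misfit between model output and noisy observations.
    return float(np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2))

# Pseudo-random search: sample the current box, keep the best point,
# then shrink the box around it and repeat.
lo = np.array([0.0, 0.0]); hi = np.array([5.0, 2.0])
best_p, best_c = None, np.inf
for stage in range(6):
    for p in rng.uniform(lo, hi, (200, 2)):
        c = cost(p)
        if c < best_c:
            best_p, best_c = p, c
    span = (hi - lo) / 4
    lo = np.maximum(lo, best_p - span)         # halve the box around the best point
    hi = np.minimum(hi, best_p + span)
```

Such derivative-free searches suit ODE-constrained estimation because each cost evaluation only requires simulating the system, never differentiating it.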
Welded joints integrity analysis and optimization for fiber laser welding of dissimilar materials
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Liu, Wei
2016-11-01
Welded joints between dissimilar materials provide many advantages in the power, automotive, chemical, and spacecraft industries. Weld bead integrity, which is determined by the process parameters, plays a significant role in welding quality during fiber laser welding (FLW) of dissimilar materials. In this paper, an optimization method that takes the integrity of the weld bead and the weld area into consideration is proposed for FLW of dissimilar materials, namely low-carbon steel and stainless steel. The relationships between weld bead integrity and process parameters are modeled by a back-propagation neural network optimized with a genetic algorithm (GA-BPNN). A particle swarm optimization (PSO) algorithm is then applied to the GA-BPNN predictions to optimize the objective. Through this optimization process, a weld bead with good integrity and minimum weld area is obtained, and the corresponding microstructure and microhardness are excellent. The mechanical properties of the optimized joints are greatly improved compared with those of the un-optimized joints. Moreover, the effects of the significant factors are analyzed using a statistical approach, and laser power (LP) is identified as the most significant factor affecting weld bead integrity and weld area. The results indicate that the proposed method is effective for improving the reliability and stability of welded joints in practical production.
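The PSO stage can be sketched in isolation: particles track personal and global bests over a coded parameter box. The quadratic `weld_objective` below stands in for the GA-BPNN prediction being optimized; the swarm size and coefficients are conventional illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(9)

def weld_objective(p):
    # Hypothetical surrogate: penalty grows away from an assumed optimum
    # (e.g. laser power, welding speed) in coded units.
    return float((p[0] - 0.6) ** 2 + 2 * (p[1] + 0.3) ** 2)

# Particle swarm optimization over the coded parameter box [-1, 1]^2.
n, dim, iters = 30, 2, 80
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
x = rng.uniform(-1, 1, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_f = np.array([weld_objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, -1, 1)                  # keep particles inside the box
    f = np.array([weld_objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

In the paper's pipeline the objective evaluated here would be the trained GA-BPNN surrogate, so each swarm evaluation is cheap compared with running a welding experiment.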
Kwok, T; Smith, K A
2000-09-01
The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.
Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan
2013-06-01
The artificial neural network (ANN) and genetic algorithm (GA) were combined to optimize the fermentation process for enhanced production of marine bacteriocin 1701 in a 5-L stirred tank. Fermentation time, pH value, dissolved oxygen level, temperature, and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between the fermentation parameters and the antibiotic effect (expressed as inhibition diameters) of bacteriocin 1701. The values predicted by the trained ANN model coincided with the observed ones (R(2) greater than 0.95). Because fermentation time was included as one of the ANN input nodes, the fermentation parameters could be optimized stage by stage through the GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved, by 26%, under the guidance of the fermentation control trajectory optimized using the combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex problem of controlling nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without complex calibration of optimization parameters. The model was applied to a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of the optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency: at the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
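One simple way to make a GA "auto-adaptive" is to tie the mutation step to the current population spread, so the step shrinks automatically as the search converges and no manual schedule is needed. The sketch below does exactly that on a hypothetical two-variable cost bowl; it illustrates the idea only, not the paper's multi-objective BMP framework.

```python
import numpy as np

rng = np.random.default_rng(10)

def cost(p):
    # Hypothetical BMP-allocation cost: quadratic bowl in two decision variables.
    return float((p[0] - 0.2) ** 2 + (p[1] - 0.7) ** 2)

n, dim, gens = 40, 2, 60
pop = rng.uniform(0, 1, (n, dim))
for g in range(gens):
    f = np.array([cost(p) for p in pop])
    order = np.argsort(f)
    parents = pop[order[: n // 2]]                   # truncation selection (elitist)
    sigma = max(parents.std(axis=0).mean(), 1e-4)    # auto-adaptive mutation step
    kids = []
    for _ in range(n - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.uniform(size=dim) < 0.5, a, b)  # uniform crossover
        child = np.clip(child + rng.normal(0, sigma, dim), 0, 1)
        kids.append(child)
    pop = np.vstack([parents, kids])
best = pop[np.argmin([cost(p) for p in pop])]
```

Because the mutation scale is read off the population itself, the same code explores broadly at the start and refines finely at the end, which is the convergence benefit the abstract attributes to auto-adaptation.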
Ghacham, Alia Ben; Pasquier, Louis-César; Cecchi, Emmanuelle; Blais, Jean-François; Mercier, Guy
2016-09-01
This work focuses on the influence of different parameters on the efficiency of steel slag carbonation in the slurry phase at ambient temperature. In the first part, a response surface methodology was used to identify the effects and interactions of gas pressure, liquid/solid (L/S) ratio, gas/liquid (G/L) ratio, and reaction time on the CO2 removed per sample and to optimize these parameters. In the second part, the effects of the parameters on the dissolution of CO2 and its conversion into carbonates were studied in more detail. The results show that the pressure and the G/L ratio have a positive effect on both the dissolution and the conversion of CO2; these results correlate with the higher CO2 mass introduced into the reactor. On the other hand, an important effect of the L/S ratio on the overall CO2 removal, and more specifically on carbonate precipitation, was identified. The best results were obtained at L/S ratios of 4:1 and 10:1, with 0.046 and 0.052 g CO2 carbonated/g sample, respectively. These yields were achieved after 10 min of reaction at ambient temperature and 10.68 bar total gas pressure following direct gas treatment.
NASA Astrophysics Data System (ADS)
Jalligampala, Archana; Sekhar, Sudarshan; Zrenner, Eberhart; Rathbun, Daniel L.
2017-04-01
To further improve the quality of visual percepts elicited by microelectronic retinal prosthetics, substantial efforts have been made to understand how retinal neurons respond to electrical stimulation. It is generally assumed that a sufficiently strong stimulus will recruit most retinal neurons. However, recent evidence has shown that the responses of some retinal neurons decrease with excessively strong stimuli (a non-monotonic response function). It is therefore necessary to identify stimuli that can activate the majority of retinal neurons even when such non-monotonic cells are part of the neuronal population. Taking these non-monotonic responses into consideration, we establish the optimal voltage stimulation parameters (amplitude, duration, and polarity) for epiretinal stimulation of network-mediated (indirect) ganglion cell responses. We recorded responses from 3958 mouse retinal ganglion cells (RGCs) in both healthy (wild type, WT) mice and a degenerating (rd10) mouse model of retinitis pigmentosa, using flat-mounted retina on a microelectrode array. Rectangular monophasic voltage-controlled pulses were presented with varying voltage, duration, and polarity. We found that in 4-5-week-old rd10 mice the RGC thresholds were comparable to those of WT mice. There was marked response variability among mouse RGCs. To account for this variability, we interpolated the percentage of RGCs activated at each point in the voltage-polarity-duration stimulus space, thus identifying the optimal voltage-controlled pulse (-2.4 V, 0.88 ms). The identified optimal voltage pulse can activate at least 65% of potentially responsive RGCs in both mouse strains. Furthermore, this pulse is well within the range of stimuli demonstrated to be safe and effective for retinal implant patients.
Such optimized stimuli and the underlying method used to identify them support a high yield of responsive RGCs and will serve as an effective guideline for future in vitro investigations of retinal electrostimulation by establishing standard stimuli for each unique experimental condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Wesley; Sattarivand, Mike
Objective: To optimize the dual-energy parameters of the ExacTrac stereoscopic x-ray imaging system for lung SBRT patients. Methods: Simulated spectra and a lung phantom were used to optimize the filter material, thickness, kVps, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify candidate materials in the atomic number (Z) range [3-83] based on a metric defined to separate the high- and low-energy spectra. Both energies used the same filter due to the time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor-mimicking material was imaged with filter thicknesses in the range [0-1] mm and kVp in the range [60-140]. A cost function based on the contrast-to-noise ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured for the dual-energy technique with tin filtering. Results: Tin was the material of choice, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm of tin and the [140, 80] kVp pair. Dual-energy images of the Rando phantom showed noticeable bone elimination compared to no filtration, and the dose was lower with tin filtering than without. Conclusions: Dual-energy soft-tissue imaging is feasible using the ExacTrac stereoscopic imaging system with a single tin filter for both high and low energies and optimized acquisition parameters.
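The weighting-factor step of dual-energy imaging can be illustrated with the standard weighted log-subtraction identity: bone cancels when the weight equals the ratio of the bone attenuation coefficients at the two energies. All numbers below (attenuation coefficients, thickness maps) are made-up illustrative values, not measured ExacTrac data.

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) at the two beam energies.
mu_bone = {"high": 0.3, "low": 0.6}
mu_soft = {"high": 0.2, "low": 0.25}

t_bone = np.array([[0.0, 2.0], [0.0, 0.0]])   # cm of bone per pixel (toy 2x2 image)
t_soft = np.array([[5.0, 5.0], [5.0, 8.0]])   # cm of soft tissue per pixel

def log_image(energy):
    # Idealized log-attenuation image at one energy (Beer-Lambert, no scatter).
    return mu_bone[energy] * t_bone + mu_soft[energy] * t_soft

# Choosing the weight as the ratio of bone attenuations cancels the bone term.
w = mu_bone["high"] / mu_bone["low"]
soft_only = log_image("high") - w * log_image("low")
```

In the resulting `soft_only` image the pixel containing 2 cm of bone is indistinguishable from its bone-free neighbor, which is the bone-elimination effect evaluated on the Rando phantom; real optimization must also weigh the noise amplification this subtraction introduces.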
Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations
NASA Astrophysics Data System (ADS)
Romanihin, S. M.; Tronin, I. V.
2016-09-01
We present the method and the results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters for the Iguassu GC at different rotor speeds.
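A compass-style direct search of the kind the abstract describes can be sketched in a few lines of Python. The cost function below is a toy stand-in (a real objective would weigh separative-power error against CPU time), and all names and values are illustrative assumptions, not the authors' model:

```python
def pattern_search(cost, x0, step=8.0, min_step=1.0):
    """Minimize a mesh-parameter cost by compass (direct) search:
    try +/- step moves along each axis, halve the step when stuck."""
    x = list(x0)
    best = cost(tuple(x))
    while step >= min_step:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if min(trial) <= 0:
                    continue  # mesh node counts must stay positive
                c = cost(tuple(trial))
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step /= 2  # refine the stencil when no axis move helps
    return tuple(x), best

# Toy stand-in cost: accuracy improves with node counts (nr, nz) while
# CPU cost grows linearly; the minimum trades the two off.
toy = lambda p: 400.0 / p[0] + 900.0 / p[1] + 0.05 * (p[0] + p[1])
params, value = pattern_search(toy, (16, 16))
```

The search needs no gradients, which suits objectives evaluated by running a flow solver.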
Penny, Christian; Grothendick, Beau; Zhang, Lin; Borror, Connie M.; Barbano, Duane; Cornelius, Angela J.; Gilpin, Brent J.; Fagerquist, Clifton K.; Zaragoza, William J.; Jay-Russell, Michele T.; Lastovica, Albert J.; Ragimbeau, Catherine; Cauchie, Henry-Michel; Sandrin, Todd R.
2016-01-01
MALDI-TOF MS has been utilized as a reliable and rapid tool for microbial fingerprinting at the genus and species levels. Recently, there has been keen interest in using MALDI-TOF MS beyond the genus and species levels to rapidly identify antibiotic-resistant strains of bacteria. The purpose of this study was to enhance strain-level resolution for Campylobacter jejuni through the optimization of spectrum processing parameters using a series of designed experiments. A collection of 172 strains of C. jejuni was assembled from Luxembourg, New Zealand, North America, and South Africa, consisting of four groups of antibiotic-resistant isolates. The groups included: (1) 65 strains resistant to cefoperazone, (2) 26 resistant to cefoperazone and beta-lactams, (3) 5 strains resistant to cefoperazone, beta-lactams, and tetracycline, and (4) 76 strains resistant to cefoperazone, teicoplanin, amphotericin B, and cephalothin. Initially, a model set of 16 strains of C. jejuni (three biological replicates and three technical replicates per isolate, yielding a total of 144 spectra) was subjected to each designed experiment to enhance detection of antibiotic resistance. The optimal parameters were applied to the larger collection of 172 isolates (two biological replicates and three technical replicates per isolate, yielding a total of 1,031 spectra). We observed an increase in antibiotic resistance detection whenever a curve-based similarity coefficient (Pearson or ranked Pearson) was applied rather than a peak-based one (Dice) and/or the optimized preprocessing parameters were applied. Increases in antimicrobial resistance detection were scored using the jackknife maximum similarity technique following cluster analysis. For the four groups of antibiotic-resistant isolates, the optimized preprocessing parameters increased detection for the respective groups by: (1) 5%, (2) 9%, (3) 10%, and (4) 2%.
A second categorization was created from the collection, consisting of 31 strains resistant to beta-lactams and 141 strains sensitive to beta-lactams. Applying the optimal preprocessing parameters, beta-lactam resistance detection was increased by 34%. These results suggest that spectrum processing parameters, which are rarely optimized or adjusted, affect the performance of MALDI-TOF MS-based detection of antibiotic resistance and can be fine-tuned to enhance screening performance. PMID:27303397
Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne
2017-01-01
Motivation: The human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, endoscopic investigation of the actual phonatory process in detail is challenging. Hence the biomechanics of the human phonatory process are not yet fully understood. Therefore, we adapt a mathematical model of the vocal folds towards vocal fold oscillations to quantify gender- and age-related differences expressed by computed biomechanical model parameters. Methods: The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying model masses, stiffness and subglottal pressure. For adapting the model towards the recorded vocal fold dynamics, three different optimization algorithms (Nelder–Mead, Particle Swarm Optimization and Simulated Bee Colony) in combination with three cost functions were considered for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion: The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and the laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results. However, only one cost function seems to be suitable for this optimization task. The resulting model parameters reflect the phonatory biomechanics for men and women well and show quantitative age- and gender-specific differences. The model parameters for younger females and males showed lower subglottal pressures, lower stiffness and higher masses than the corresponding elderly groups.
Females exhibited higher subglottal pressures, smaller oscillation masses and larger stiffness than the corresponding similar aged male groups. Optimizing numerical models towards vocal fold oscillations is useful to identify underlying laryngeal components controlling the phonatory process. PMID:29121085
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis to detect non-identifiable factors; and iii) formation of a parameter subset estimated using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and is potentially applicable to other ordinary differential equation models. PMID:25682959
NASA Astrophysics Data System (ADS)
Narang, H. K.; Mahapatra, M. M.; Jha, P. K.; Biswas, P.
2014-05-01
Autogenous arc welds with minimum upper weld bead depression and lower weld bead bulging are desired, as such welds do not require a second welding pass to fill the upper bead depressions (UBDs) and are characterized by minimum angular distortion. The present paper describes optimization and prediction of angular distortion and weldment characteristics such as upper weld bead depression and lower weld bead bulging of TIG-welded structural steel square butt joints. A full factorial design of experiments was utilized to select the combinations of welding process parameters used to produce the square butt joints. A mathematical model was developed to establish the relationship between TIG welding process parameters and responses such as upper bead width, lower bead width, UBD, lower bead height (bulging), weld cross-sectional area, and angular distortion. The optimal welding condition to minimize UBD and lower bead bulging of the TIG butt joints was identified.
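A full-factorial run matrix of the kind used above is straightforward to enumerate. The sketch below uses illustrative parameter names and levels, not the paper's actual settings:

```python
import itertools

# Full-factorial design: one run for every combination of the chosen levels
# of each TIG process parameter (names and levels are illustrative
# placeholders, not the paper's actual factors).
levels = {
    "current_A": [80, 100, 120],
    "speed_mm_min": [90, 120],
    "root_gap_mm": [0.0, 0.5, 1.0],
}
runs = [dict(zip(levels, combo)) for combo in itertools.product(*levels.values())]
# 3 * 2 * 3 = 18 welding conditions in total
```

Every level combination appears exactly once, which is what allows all main effects and interactions to be estimated in the regression model.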
Systematic parameter inference in stochastic mesoscopic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Li, Zhen
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
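The sparse-recovery step — fitting expansion coefficients from few samples under a sparsity prior — can be illustrated with a basic iterative soft-thresholding (ISTA) solver. The basis matrix and coefficients below are a synthetic toy, not an actual gPC expansion:

```python
import random

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return x - t if x > t else x + t if x < -t else 0.0

def ista(A, y, lam=0.05, iters=4000):
    """Sparse recovery: minimize 0.5*||A c - y||^2 + lam*||c||_1 by
    iterative soft-thresholding (a basic compressive-sensing solver)."""
    m, n = len(A), len(A[0])
    L = sum(sum(a * a for a in row) for row in A)  # crude Lipschitz bound
    c = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * c[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        c = [soft(c[j] - g[j] / L, lam / L) for j in range(n)]
    return c

# Toy problem: 12 samples of a 6-term basis whose true coefficient vector
# is sparse (only terms 2 and 4 active), mimicking a sparse expansion.
random.seed(3)
truth = [0.0, 0.0, 1.5, 0.0, -2.0, 0.0]
A = [[random.gauss(0.0, 1.0) for _ in range(6)] for _ in range(12)]
y = [sum(a * t for a, t in zip(row, truth)) for row in A]
coeffs = ista(A, y)
```

The l1 penalty drives inactive coefficients to exactly zero, which is what lets far fewer samples suffice than an unregularized least-squares fit would need.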
Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M
2015-05-01
The objectives are to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, T2-defined peripheral zone (PZ), and central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (K(trans),Kep,Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). Area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90 % specificity, sensitivity was 41 % (MRSI voxel resolution) and 59 % per lesion. At this specificity, an expert observer achieved 28 % and 49 % sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for near-optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
Detection of quantitative trait loci affecting response to crowding stress in rainbow trout
USDA-ARS?s Scientific Manuscript database
Aquaculture environmental stressors such as handling, overcrowding, sub-optimal water quality parameters and social interactions negatively impact growth, feed intake, feed efficiency, disease resistance, flesh quality and reproductive performance in rainbow trout. To identify QTL affecting response...
Identifying differentially expressed genes in cancer patients using a non-parameter Ising model.
Li, Xumeng; Feltus, Frank A; Sun, Xiaoqian; Wang, James Z; Luo, Feng
2011-10-01
Identification of genes and pathways involved in diseases and physiological conditions is a major task in systems biology. In this study, we developed a novel non-parameter Ising model to integrate a protein-protein interaction network with microarray data for identifying differentially expressed (DE) genes. We also proposed a simulated annealing algorithm to find the optimal configuration of the Ising model. The Ising model was applied to two breast cancer microarray data sets. The results showed that more cancer-related DE sub-networks and genes were identified by the Ising model than by the Markov random field model. Furthermore, cross-validation experiments showed that DE genes identified by the Ising model can improve classification performance compared with those identified by the Markov random field model. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
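A minimal version of the simulated-annealing search over Ising configurations might look as follows. The energy function is a simplified stand-in for the paper's model, and the fields and edges are illustrative assumptions:

```python
import math, random

def anneal_ising(edges, field, steps=30000, t0=2.0, t1=0.01):
    """Simulated annealing on a network-constrained Ising model.

    Spin s[i] = +1 marks gene i as differentially expressed; the energy
    E = -sum_i field[i]*s[i] - sum_(i,j) s[i]*s[j] rewards agreement with
    per-gene evidence `field` and smoothness across interaction `edges`
    (a simplified stand-in for the paper's actual model).
    """
    n = len(field)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    s = [random.choice((-1, 1)) for _ in range(n)]
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)  # geometric cooling schedule
        i = random.randrange(n)
        # Energy change of flipping spin i; accept downhill moves always,
        # uphill moves with Boltzmann probability exp(-dE/t).
        dE = 2 * s[i] * (field[i] + sum(s[j] for j in nbrs[i]))
        if dE <= 0 or random.random() < math.exp(-dE / t):
            s[i] = -s[i]
    return s
```

On a small chain with strong evidence at the ends, the annealer settles into the configuration that balances per-gene evidence against network smoothness.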
Optimization of a pressure control valve for high power automatic transmission considering stability
NASA Astrophysics Data System (ADS)
Jian, Hongchao; Wei, Wei; Li, Hongcai; Yan, Qingdong
2018-02-01
The pilot-operated electrohydraulic clutch-actuator system is widely utilized in high power automatic transmissions because of the demand for a large flowrate and its excellent pressure regulating capability. However, a self-excited vibration induced by the inherent non-linear characteristics of valve spool motion coupled with the fluid dynamics can be generated during the working state of hydraulic systems due to inappropriate system parameters, which causes sustained instability in the system and leads to unexpected performance deterioration and hardware damage. To ensure stable and fast response performance of the clutch actuator system, an optimal design method for the pressure control valve considering stability is proposed in this paper. A non-linear dynamic model of the clutch actuator system is established based on the motion of the valve spool and the coupled fluid dynamics in the system. The stability boundary in the parameter space is obtained by numerical stability analysis. The sensitivity of the stability boundary and the output pressure response time with respect to the valve parameters is identified using a design of experiments (DOE) approach. The pressure control valve is optimized using a particle swarm optimization (PSO) algorithm with the stability boundary as a constraint. The simulation and experimental results reveal that the optimization method proposed in this paper helps improve the response characteristics while ensuring the stability of the clutch actuator system during the entire gear shift process.
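A bare-bones particle swarm optimizer with a stability constraint folded in as a penalty can be sketched as follows; the objective is a toy surrogate with invented parameter names, not the paper's valve model:

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm shares a global best that attracts all particles."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy valve objective: a response-time surrogate plus a large penalty when
# a damping-like parameter x[1] falls below the stability boundary (0.2),
# mimicking "stability boundary as a constraint".
def objective(x):
    penalty = 1e6 if x[1] < 0.2 else 0.0  # infeasible: self-excited vibration
    return (x[0] - 3.0) ** 2 + (x[1] - 0.1) ** 2 + penalty

random.seed(0)  # reproducible demo run
best, best_cost = pso(objective, [(0.0, 10.0), (0.0, 1.0)])
```

Because the unconstrained minimum lies inside the unstable region, the optimum found by the swarm sits on the stability boundary, as in the paper's constrained design problem.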
NASA Astrophysics Data System (ADS)
Cheng, Song; Zhang, Shengzhou; Zhang, Libo; Xia, Hongying; Peng, Jinhui; Wang, Shixing
2017-09-01
Eupatorium adenophorum, a globally distributed invasive weed, was utilized as the feedstock for preparation of activated carbon (AC) via microwave-induced KOH activation. The influences of three vital process parameters - microwave power, activation time and impregnation ratio (IR) - on the adsorption capacity and yield of the AC were assessed. The process parameters were optimized utilizing the Design Expert software and were identified to be a microwave power of 700 W, an activation time of 15 min and an IR of 4, with the resultant iodine adsorption number and yield being 2,621 mg/g and 28.25 %, respectively. The key parameters that characterize the AC, such as the Brunauer-Emmett-Teller (BET) surface area, total pore volume and average pore diameter, were estimated to be 3,918 m2/g, 2.383 ml/g and 2.43 nm, respectively, under the optimized process conditions. The surface characteristics of the AC were characterized by Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy.
NASA Astrophysics Data System (ADS)
Kuhn, A. M.; Fennel, K.; Bianucci, L.
2016-02-01
A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on the models' ability to reproduce the spatio-temporal chlorophyll distribution. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature-dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill, and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model.
Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means to acquire predictive understanding of the ocean's ecology.
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to the log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs.
The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
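The Box-Cox selection step can be sketched as follows. For simplicity this sketch maximizes the standard Box-Cox profile log-likelihood rather than the Shapiro-Wilk statistic used in the abstract, and the "SUV" data are a synthetic lognormal sample:

```python
import math, random

def boxcox(x, lam):
    """One-parameter Box-Cox transform; lam = 0 reduces to the log."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def best_boxcox_lambda(x, lams):
    """Select the Box-Cox parameter by maximizing the profile
    log-likelihood, a standard criterion closely related to the
    Shapiro-Wilk maximization described in the abstract."""
    n = len(x)
    log_sum = sum(math.log(v) for v in x)
    best_lam, best_ll = None, float("-inf")
    for lam in lams:
        y = boxcox(x, lam)
        mu = sum(y) / n
        var = sum((v - mu) ** 2 for v in y) / n
        ll = -0.5 * n * math.log(var) + (lam - 1.0) * log_sum
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam

# Lognormal toy "SUV" data: the selected lambda should sit near 0,
# matching the paper's conclusion that a log transform is near-optimal.
random.seed(7)
suv = [math.exp(random.gauss(1.0, 0.5)) for _ in range(300)]
lam_hat = best_boxcox_lambda(suv, [i / 10 for i in range(-10, 21)])
```

A grid over lambda is usually sufficient here because the profile likelihood is smooth and one-dimensional.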
Stochastic Optimization For Water Resources Allocation
NASA Astrophysics Data System (ADS)
Yamout, G.; Hatfield, K.
2003-12-01
For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods can lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated manner. This paper presents an integrated, dynamic, multi-stage, feedback-controlled, linear, stochastic, distributed-parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) environmental degradation produced by resource consumption, and finally (3) the uncertainty and risk generated by the inherently random nature of the state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced together to constitute an integrated water allocation optimization framework. This effort is a novel approach to confronting the water allocation optimization problem while accounting for uncertainties associated with all its elements, thus resulting in a solution that correctly reflects the physical problem at hand.
Strategie de commande optimale de la production electrique dans un site isole
NASA Astrophysics Data System (ADS)
Barris, Nicolas
Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regard to the production costs of the electricity consumed. These two factors combined result in yearly operating losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity either by adding a generator to the production or by switching to a more powerful generator. The same happens when the load decreases. Every decision regarding changes in production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimisation approach. The objective of the presented work is to minimize diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the operating losses generated by the isolated power grids and the CO2 equivalent emissions without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be made to better achieve the minimization objective.
The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. In order to improve the performance of the optimization, it has been decided that the problem structure would be used. Two different approaches are considered: optimizing one set of parameters at a time and optimizing different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient. The optimal values of the ultimate rules parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high efficiency zone of every configuration is used. Therefore, it seems possible to directly identify on the graphs these optimal transition points and define the parameters in the control strategy without even having to run any optimization process. 
The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.
Experimental Design for Parameter Estimation of Gene Regulatory Networks
Timmer, Jens
2012-01-01
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723
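The profile-likelihood idea used above — re-optimizing the remaining parameters at each fixed value of the parameter of interest — can be illustrated on a toy exponential-decay model (not one of the DREAM6 networks; all names and values here are illustrative):

```python
import math

def profile_rss(t, y, k_grid):
    """Profile the decay rate k of the model y ~ a*exp(-k*t): at each fixed
    k, the amplitude a is optimized analytically (linear least squares) and
    the residual sum of squares is recorded. A flat profile would signal a
    non-identifiable parameter; a sharp minimum means k is well determined."""
    prof = []
    for k in k_grid:
        e = [math.exp(-k * ti) for ti in t]
        a = sum(yi * ei for yi, ei in zip(y, e)) / sum(ei * ei for ei in e)
        rss = sum((yi - a * ei) ** 2 for yi, ei in zip(y, e))
        prof.append((k, rss))
    return prof

t = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
y = [2.0 * math.exp(-0.8 * ti) for ti in t]  # noise-free toy data
profile = profile_rss(t, y, [i / 100 for i in range(10, 200)])
k_hat = min(profile, key=lambda p: p[1])[0]  # recovers k = 0.8 exactly here
```

With noisy data, a confidence interval for k is read off as the set of grid values whose profiled objective stays within a chi-squared threshold of the minimum.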
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from a premix of elemental powders of cobalt (Co), iron (Fe) and aluminum (Al) in the stoichiometric ratio 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometry (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The relationship between the input process parameters and the response was established with the help of regression analysis. Analysis of variance was then applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same ranges. Within the response surface methodology, the process parameters must be optimized to obtain improved magnetic properties; the optimum process parameters were identified using numerical and graphical optimization techniques.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
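The abstract describes progressive sampling-based Bayesian optimization only at a high level; the progressive-sampling half of the idea — evaluating hyper-parameter candidates on growing data samples and pruning poor performers early — can be sketched as follows (the candidate grid, error function and pruning rule are illustrative assumptions, not the authors' method):

```python
import random

def progressive_sampling_search(candidates, evaluate, data_sizes, keep_frac=0.5):
    """Evaluate hyper-parameter candidates on progressively larger samples,
    pruning the worst performers at each stage (illustrative sketch)."""
    survivors = list(candidates)
    for n in data_sizes:
        scored = sorted(survivors, key=lambda c: evaluate(c, n))
        keep = max(1, int(len(scored) * keep_frac))
        survivors = scored[:keep]          # keep the lowest-error candidates
    return survivors[0]

# Toy example: "error" is minimized at learning_rate = 0.1, and the noise in
# the error estimate shrinks as the sample size grows.
random.seed(0)
def toy_error(cand, n_samples):
    lr = cand["learning_rate"]
    noise = random.gauss(0, 0.01 / n_samples)
    return (lr - 0.1) ** 2 + noise

grid = [{"learning_rate": lr} for lr in (0.001, 0.01, 0.1, 0.5, 1.0)]
best = progressive_sampling_search(grid, toy_error, data_sizes=[100, 1000, 10000])
```

The point of the staged schedule is that cheap small-sample evaluations eliminate most candidates before any expensive full-data training is done.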
NASA Astrophysics Data System (ADS)
Rosenberg, David E.
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the modeled issues and managers often seek near-optimal alternatives that address unmodeled objectives, preferences, limits, uncertainties, and other issues. Early on, Modeling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed some unmodeled issues. This paper presents new stratified, Monte-Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and extent of the near-optimal region to an optimization problem. Interactive plot controls allow users to explore region features of most interest. Controls also streamline the process to elicit unmodeled issues and update the model formulation in response to elicited issues. Use for an example, single-objective, linear water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Flexibility is upheld by further interactive alternative generation, transforming the formulation into a multiobjective problem, and relaxing the tolerance parameter to expand the near-optimal region. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, and help elicit a larger set of unmodeled issues.
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Liu, Ze; Xu, Jing
2016-01-01
Shearers play an important role in the fully mechanized coal mining face, and accurately identifying their cutting pattern is very helpful for improving the automation level of shearers and ensuring the safety of coal mining. The least squares support vector machine (LSSVM) has been proven to offer strong potential in prediction and classification issues, particularly by employing an appropriate meta-heuristic algorithm to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and reaching the global optimal solution slowly. In this paper, an improved fruit fly optimization algorithm (IFOA) to optimize the parameters of LSSVM is presented, and the LSSVM coupled with IFOA (IFOA-LSSVM) is used to identify the shearer cutting pattern. The vibration acceleration signals of five cutting patterns were collected and the special state features were extracted based on the ensemble empirical mode decomposition (EEMD) and the kernel function. Some examples on the IFOA-LSSVM model were further presented and the results were compared with the LSSVM, PSO-LSSVM, GA-LSSVM and FOA-LSSVM models in detail. The comparison results indicate that the proposed approach is feasible and efficient and outperforms the others. Finally, an industrial application example at the coal mining face is presented to demonstrate the effectiveness of the proposed system. PMID:26771615
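For readers unfamiliar with the basic fruit fly optimization algorithm (FOA) that the IFOA builds on, a minimal sketch of the loop is given below (the paper's IFOA adds a guiding strategy and a modified scout formula that are not reproduced here; the objective function and step sizes are illustrative):

```python
import random

def foa_minimize(objective, dim, iters=200, pop=20, seed=1):
    """Basic fruit fly optimization (FOA) loop: flies search randomly around
    the swarm location; the best 'smell' (lowest objective value) relocates
    the swarm. Illustrative sketch only."""
    rng = random.Random(seed)
    swarm = [rng.uniform(-5, 5) for _ in range(dim)]
    best_x, best_f = list(swarm), objective(swarm)
    for _ in range(iters):
        for _ in range(pop):
            fly = [xi + rng.uniform(-1, 1) for xi in swarm]  # random flight
            f = objective(fly)
            if f < best_f:
                best_x, best_f = fly, f
        swarm = list(best_x)               # swarm flies toward the best smell
    return best_x, best_f

# Toy objective: sphere function, minimum 0 at the origin
x, f = foa_minimize(lambda v: sum(t * t for t in v), dim=2)
```

In the paper's setting the two coordinates would be the LSSVM regularization and kernel parameters, and the objective would be a cross-validated classification error.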
Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can be utilized to accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed in the method, which can help balance the local exploitation ability and global exploration capability. The formula for the scout bees to search for the food source is also modified to increase the convergence speed. Some experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, and it can be seen that the convergence rate of the IABC algorithm is better than that of the standard PSO method.
Feature selection with harmony search.
Diao, Ren; Shen, Qiang
2012-12-01
Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
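A minimal binary harmony-search loop for feature selection might look like the following sketch (the memory size, HMCR/PAR values and toy evaluation function are illustrative assumptions, not the parameter-control schemes proposed in the paper):

```python
import random

def harmony_search_fs(n_features, evaluate, memory_size=10, iters=300,
                      hmcr=0.9, par=0.3, seed=42):
    """Binary harmony search for feature selection (illustrative sketch).
    Each 'harmony' is a 0/1 mask over features; evaluate() returns a subset
    quality score to maximize (any subset evaluation technique can be used)."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(memory_size)]
    scores = [evaluate(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:                      # memory consideration
                bit = rng.choice(memory)[j]
                if rng.random() < par:                   # pitch adjustment
                    bit = 1 - bit
            else:                                        # random selection
                bit = rng.randint(0, 1)
            new.append(bit)
        s = evaluate(new)
        worst = scores.index(min(scores))
        if s > scores[worst]:                            # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = scores.index(max(scores))
    return memory[best], scores[best]

# Toy evaluation: features 0 and 2 are useful, every selected feature costs 1
def toy_score(mask):
    return 2 * mask[0] + 2 * mask[2] - sum(mask)

subset, score = harmony_search_fs(6, toy_score)
```

Because each improvisation samples bits from the whole memory rather than mutating a single parent, the search can escape local subsets, which is the property the paper exploits.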
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as Particle Swarm Optimization (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
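A bare-bones PSO calibration loop, of the kind such an automatic calibration tool wraps around a simulation model, can be sketched as follows (the inertia and acceleration coefficients and the toy linear "model" are illustrative choices, not the SWAT setup):

```python
import random

def pso_calibrate(objective, bounds, n_particles=30, iters=100, seed=7):
    """Minimal particle swarm optimizer for model calibration (sketch).
    objective(params) returns a calibration error to minimize, e.g. the sum
    of squared differences between simulated and observed values."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy calibration: recover parameters (a, b) of y = a*x + b from 'observations'
obs = [(x, 2.0 * x + 1.0) for x in range(10)]
err = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in obs)
params, best_err = pso_calibrate(err, bounds=[(0, 5), (-5, 5)])
```

In a real calibration, `objective` would run the hydrologic simulation with the candidate parameter vector and score it against observed streamflow or sediment records.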
NASA Astrophysics Data System (ADS)
Metzger, Philip T.; Lane, John E.; Carilli, Robert A.; Long, Jason M.; Shawn, Kathy L.
2010-07-01
A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.
Structural Identifiability of Dynamic Systems Biology Models
Villaverde, Alejandro F.
2016-01-01
A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
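The core idea — treating parameters as constant extra states and checking the rank of a Jacobian built from Lie derivatives of the output — can be illustrated on a deliberately tiny model (this is a sketch of the general approach, not the STRIKE-GOLDD implementation):

```python
import sympy as sp

# Toy model: dx/dt = -p*x, measured output y = x. Treat the parameter p as an
# extra state with zero dynamics and test identifiability via the rank of the
# observability-identifiability matrix built from Lie derivatives.
x, p = sp.symbols("x p")
states = sp.Matrix([x, p])
f = sp.Matrix([-p * x, 0])        # extended dynamics (parameter is constant)
h = sp.Matrix([x])                # measured output

lies = [h]
for _ in range(len(states) - 1):  # n-1 extra Lie derivatives for n states
    lies.append(lies[-1].jacobian(states) * f)

O = sp.Matrix.vstack(*[L.jacobian(states) for L in lies])
rank = O.rank()
identifiable = rank == len(states)
```

Here the matrix has full generic rank 2, so both the state and the parameter are structurally identifiable from the output; a rank deficit would instead flag an unidentifiable combination.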
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Günther, Michael; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
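A minimal example of the basic ingredient, first-order parameter sensitivity of an ODE solution computed by finite differences, is sketched below (the one-state linear dynamics are an illustrative stand-in for the Hatze and Zajac activation models, which are more elaborate):

```python
def simulate(p, x0=0.0, dt=0.001, t_end=1.0):
    """Forward-Euler solution of the activation-style ODE dx/dt = p*(1 - x),
    returned at t_end (illustrative stand-in for a real activation model)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * p * (1.0 - x)
        t += dt
    return x

def sensitivity(p, eps=1e-6):
    """First-order sensitivity dx(t_end)/dp via central finite differences."""
    return (simulate(p + eps) - simulate(p - eps)) / (2 * eps)

s = sensitivity(2.0)
```

For this linear ODE the analytic solution is x(t) = 1 - exp(-p*t), so the sensitivity at t = 1 is t*exp(-p*t) = exp(-2) ≈ 0.135, which the finite-difference estimate reproduces; a large value of such a sensitivity flags a parameter worth estimating carefully, a near-zero value a candidate for model reduction.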
Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh
2009-02-20
Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOEs) is a tool that has been used by many industries for the purpose of optimizing processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge for implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were coating, detection antibody concentration, and streptavidin-HRP concentrations. The inter-plate factors included incubation times for each step. The objective was to maximize the logS/B (S/B) of the low standard to the blank. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the logS/B prediction was compared against the observed logS/B during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20% which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for logS/B were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. 
The approach allows for identification of interactions between factors, consistency in optimal parameter determination, and reduced method development time.
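The response-surface step — fitting a quadratic model to designed-experiment data and locating its stationary point — can be illustrated with a single-factor least-squares sketch (the data are hypothetical; the study itself used a three-factor modified CCD and JMP 7.0):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design data: x is a normalized factor level (e.g. a coating
# concentration) and the response is log(S/B); the true optimum is at x = 0.5.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 0.5, 0.5])   # with center replicates
log_sb = 1.0 - 4.0 * (x - 0.5) ** 2 + rng.normal(0, 0.01, x.size)

# Fit the quadratic response surface log(S/B) = b0 + b1*x + b2*x^2
A = np.column_stack([np.ones_like(x), x, x ** 2])
b0, b1, b2 = np.linalg.lstsq(A, log_sb, rcond=None)[0]
x_opt = -b1 / (2 * b2)      # stationary point of the fitted quadratic
```

With several factors the same least-squares fit simply gains cross-terms, and the stationary point comes from solving the resulting linear system instead of a single division.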
Gahlawat, Geeta; Srivastava, Ashok K
2012-11-01
Polyhydroxybutyrate or PHB is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of fundamental kinetic parameters of the PHB fermentation process. Preliminary culture growth inhibition studies were followed by statistical optimization of the medium recipe using response surface methodology to increase PHB production. Later on, batch cultivation in a 7-L bioreactor was attempted using the optimum concentrations of medium components (process variables) obtained from the statistical design to identify the batch growth and product kinetics parameters of PHB fermentation. A. australica exhibited a maximum biomass and PHB concentration of 8.71 and 6.24 g/L, respectively, in the bioreactor, with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yields on sucrose were 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation will be used in the development of a batch kinetic mathematical model for PHB production, which will serve as a launching pad for further process optimization studies, e.g., the design of several bioreactor cultivation strategies to further enhance biopolymer production.
Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bush, Lance B.
1992-01-01
An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.
Analysis of automated quantification of motor activity in REM sleep behaviour disorder.
Frandsen, Rune; Nikolic, Miki; Zoetmulder, Marielle; Kempfner, Lykke; Jennum, Poul
2015-10-01
Rapid eye movement (REM) sleep behaviour disorder (RBD) is characterized by dream enactment and REM sleep without atonia. Atonia is evaluated on the basis of visual criteria, but there is a need for more objective, quantitative measurements. We aimed to define and optimize a method for establishing the baseline and all other parameters for automatically quantifying submental motor activity during REM sleep. We analysed the electromyographic activity of the submental muscle in polysomnographs of 29 patients with idiopathic RBD (iRBD), 29 controls and 43 Parkinson's disease (PD) patients. Six adjustable parameters for motor activity were defined. Motor activity was detected and quantified automatically. The optimal parameters for separating RBD patients from controls were investigated by identifying the greatest area under the receiver operating curve from a total of 648 possible combinations. The optimal parameters were validated on the PD patients. Automatic baseline estimation improved the characterization of atonia during REM sleep, as it eliminates inter/intra-observer variability and can be standardized across diagnostic centres. We found an optimized method for quantifying motor activity during REM sleep. The method was stable and can be used to differentiate RBD from controls and to quantify motor activity during REM sleep in patients with neurodegeneration. No control had more than 30% of REM sleep with increased motor activity, whereas patients with known RBD had activity as low as 4.5%. We developed and applied a sensitive, quantitative, automatic algorithm to evaluate loss of atonia in RBD patients. © 2015 European Sleep Research Society.
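The parameter search the authors describe — scoring every parameter combination by the area under the ROC curve and keeping the best — can be sketched generically as follows (the toy scoring rule and data are hypothetical, not the six EMG parameters of the paper):

```python
from itertools import product

def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive scores above a random negative."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_parameters(grid, score, patients, controls):
    """Exhaustively search parameter combinations (as in the 648-combination
    search described above) for the one maximizing the ROC AUC."""
    best = max(product(*grid.values()),
               key=lambda combo: auc([score(x, combo) for x in patients],
                                     [score(x, combo) for x in controls]))
    return dict(zip(grid.keys(), best))

# Hypothetical per-subject EMG samples; the score is the fraction of samples
# exceeding an amplitude threshold, the single tunable parameter here.
patients = [[2, 6, 8], [1, 7, 9], [3, 6, 7]]
controls = [[1, 2, 3], [2, 2, 4], [1, 1, 5]]
def toy_score(samples, combo):
    thr = combo[0]
    return sum(v > thr for v in samples) / len(samples)

params = best_parameters({"amplitude_threshold": [0.5, 5.0, 8.5]},
                         toy_score, patients, controls)
```

A too-low threshold scores everyone alike (AUC 0.5) and a too-high one misses patient activity, so the middle threshold wins, mirroring how the 648-combination search trades off sensitivity and specificity.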
Wang, Zimeng; Meenach, Samantha A
2017-12-01
Nanocomposite microparticle (nCmP) systems exhibit promising potential in the application of therapeutics for pulmonary drug delivery. This work aimed at identifying the optimal spray-drying condition(s) to prepare nCmP with specific drug delivery properties including small aerodynamic diameter, effective nanoparticle (NP) redispersion upon nCmP exposure to an aqueous solution, high drug loading, and low water content. Acetalated dextran (Ac-Dex) was used to form NPs, curcumin was used as a model drug, and mannitol was the excipient in the nCmP formulation. Box-Behnken design was applied using Design-Expert software for nCmP parameter optimization. NP ratio (NP%) and feed concentration (Fc) are significant parameters that affect the aerodynamic diameters of nCmP systems. NP% is also a significant parameter that affects the drug loading. Fc is the only parameter that influenced the water content of the particles significantly. All nCmP systems could be completely redispersed into the parent NPs, indicating that none of the factors have an influence on this property within the design range. The optimal spray-drying condition to prepare nCmP with a small aerodynamic diameter, redispersion of the NPs, low water content, and high drug loading is 80% NP%, 0.5% Fc, and an inlet temperature lower than 130°C. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Optimal lunar soft landing trajectories using taboo evolutionary programming
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
A safe lunar landing is a key factor in undertaking effective lunar exploration. A lunar lander mission consists of four phases: the launch phase, the Earth-Moon transfer phase, the circumlunar phase and the landing phase. The landing phase can be either a hard landing or a soft landing. Hard landing means the vehicle lands under the influence of gravity without any deceleration measures, whereas soft landing reduces the vertical velocity of the vehicle before landing. Therefore, for the safety of the astronauts as well as the vehicle, a lunar soft landing with an acceptable velocity is essential, and it is important to design the optimal lunar soft landing trajectory with minimum fuel consumption. Optimization of lunar soft landing is a complex optimal control problem. In this paper, an analysis related to lunar soft landing from a parking orbit around the Moon has been carried out. A two-dimensional trajectory optimization problem is attempted. The problem is complex due to the presence of system constraints. To solve for the time history of the control parameters, the problem is converted into a two-point boundary value problem using Pontryagin's maximum principle. Taboo Evolutionary Programming (TEP) is a stochastic technique developed in recent years and successfully implemented in several fields of research; it combines the features of taboo search and single-point-mutation evolutionary programming. Identifying the best unknown parameters of the problem under consideration is the central idea of many space trajectory optimization problems. The TEP technique is used in the present methodology for the best estimation of the initial unknown parameters by minimizing an objective function in terms of fuel requirements. The optimal estimation subsequently results in an optimal trajectory design of a module for soft landing on the Moon from a lunar parking orbit.
Numerical simulations demonstrate that the proposed approach is highly efficient and minimizes fuel consumption. A comparison with results available in the literature shows that the solution of the present algorithm is better than that of some existing algorithms. Keywords: soft landing, trajectory optimization, evolutionary programming, control parameters, Pontryagin principle.
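The conversion into a two-point boundary value problem via Pontryagin's maximum principle can be sketched in generic form (the dynamics, cost and symbols below are illustrative assumptions for a fuel-optimal planar descent, not the exact model of the paper):

```latex
% Fuel-optimal soft landing: minimize propellant use subject to point-mass
% dynamics with bounded thrust direction u, thrust magnitude T, exhaust speed c.
\begin{align*}
  \min_{u(\cdot)} \; & J = \int_{0}^{t_f} \frac{T}{c}\, dt \\
  \text{s.t. } \; & \dot{r} = v, \qquad
    \dot{v} = g(r) + \frac{T}{m}\,u, \qquad \dot{m} = -\frac{T}{c}, \qquad \|u\| \le 1 .
\end{align*}
% With Hamiltonian
%   H = \lambda_r^{\top} v + \lambda_v^{\top}\left(g + \frac{T}{m} u\right)
%       - \lambda_m \frac{T}{c} + \frac{T}{c},
% the costates satisfy \dot{\lambda} = -\partial H / \partial x and the optimal
% thrust direction is u^* = -\lambda_v / \|\lambda_v\| (the primer vector).
% Guessing the unknown initial costates \lambda(0) and shooting to meet the
% terminal conditions r(t_f) = r_{target}, v(t_f) = 0 is the two-point boundary
% value problem whose unknown parameters the TEP search estimates.
```

The "best unknown parameters" mentioned in the abstract are precisely these initial costate guesses, which have no physical measurement and therefore suit a stochastic search such as TEP.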
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Liwei; Qian, Yun; Zhou, Tianjun
2014-10-01
In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter “relative humidity criteria” (RH), which had not been considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, which are contributed by the positive responses of explicit rainfall. Following an increase in RH, the increases in low-level convergence and the associated increases in cloud water favor an increase in explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model’s new version with a mixed convection scheme.
Optimizing Decision Support for Tailored Health Behavior Change Applications.
Kukafka, Rita; Jeong, In cheol; Finkelstein, Joseph
2015-01-01
The Tailored Lifestyle Change Decision Aid (TLC DA) system was designed to provide support for a person to make an informed choice about which behavior change to work on when multiple unhealthy behaviors are present. TLC DA can be delivered via web, smartphones and tablets. The system collects a significant amount of information that is used to generate tailored messages to persuade consumers toward certain healthy lifestyles. One limitation is the necessity to collect vast amounts of information that users must enter manually. By identifying an optimal set of self-reported parameters, we will be able to minimize the data-entry burden on app users. The aim of this study was to identify the primary determinants of health behavior choices made by patients after using the system. Using discriminant analysis, an optimal set of predictors was identified. The resulting set included smoking status, smoking cessation success estimate, self-efficacy, body mass index and diet status. Predicting the smoking cessation choice was the most accurate, followed by weight management. Physical activity and diet choices were better identified in a combined cluster.
Dynamic Portfolio Strategy Using Clustering Approach
Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333
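The central/peripheral stock selection described above can be sketched as follows, assuming the standard correlation-to-distance mapping d_ij = sqrt(2(1 - rho_ij)) used in MST stock networks. The 4-stock correlation matrix is a toy example, not Chinese market data.

```python
# Sketch: build an MST on correlation distances, then pick the highest-degree
# (central) and lowest-degree (peripheral) stocks as portfolio candidates.
import math

def mst_edges(dist):
    """Prim's algorithm on a dense distance matrix; returns the MST edge list."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    if best is None or dist[i][j] < dist[best[0]][best[1]]:
                        best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges

rho = [  # toy correlations among 4 hypothetical stocks
    [1.0, 0.8, 0.3, 0.2],
    [0.8, 1.0, 0.4, 0.1],
    [0.3, 0.4, 1.0, 0.5],
    [0.2, 0.1, 0.5, 1.0],
]
d = [[math.sqrt(2 * (1 - rho[i][j])) for j in range(4)] for i in range(4)]
edges = mst_edges(d)
degree = [0] * 4
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
central = degree.index(max(degree))     # candidate for a "central" portfolio
peripheral = degree.index(min(degree))  # candidate for a "peripheral" portfolio
```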
NASA Astrophysics Data System (ADS)
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. The model employs the UTCHEM 9.0 software as its simulator for solving the governing equations of multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase population diversity, a new approach named the "knock the base" method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and the designated monitoring networks. The numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. In summary: a model is proposed to identify the characteristics of immiscible NAPL contaminant sources; the contaminant is immiscible in water, and multi-phase flow is simulated; the model is a multi-level saturation-based optimization algorithm based on the ICA; each answer string in the second level is divided into a set of provinces; and each ICA is modified by incorporating the new "knock the base" approach.
Mathematical modeling of a Ti:sapphire solid-state laser
NASA Technical Reports Server (NTRS)
Swetits, John J.
1987-01-01
The project initiated a study of a mathematical model of a tunable Ti:sapphire solid-state laser. A general mathematical model was developed for the purpose of identifying design parameters which will optimize the system, and serve as a useful predictor of the system's behavior.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
A CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, largely because of a lack of knowledge of optimization techniques. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers understand and determine the best optimal parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters for the performance function preferred by the manufacturer. Overall, the system can narrow the gap between academia and industry by introducing a simple, easy-to-implement optimization technique that gives accurate results and is fast.
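A minimal PSO loop of the kind used in the optimization stage might look like this. The objective is a stand-in sphere function, not the paper's machining performance model, and all hyperparameters are illustrative.

```python
# Minimal particle swarm optimization (PSO) sketch for a 2-D objective.
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = pbest[pbest_val.index(min(pbest_val))][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < f(g):
                    g = pos[i][:]
    return g

# stand-in objective with minimum at (2, -1)
best = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, [(-5, 5), (-5, 5)])
```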
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulation of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment, and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
Optimizing RF gun cavity geometry within an automated injector design system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofler, Alicia; Evtushenko, Pavel
2011-03-28
RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
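One of the figures of merit compared above, the Matthews Correlation Coefficient, reduces the 2-category confusion matrix to a single score. A minimal sketch:

```python
# Sketch: Matthews Correlation Coefficient for a binary (target present /
# absent) classification, computed from confusion-matrix counts.
import math

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # define MCC = 0 for degenerate cases

# example counts (made up): 90 true detections, 80 true rejections,
# 20 false alarms, 10 missed detections
score = mcc(90, 80, 20, 10)
```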
Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography
NASA Astrophysics Data System (ADS)
Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.
2010-12-01
Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e. identify hydrologic structure and estimate hydrologic parameters). However the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation we evaluate the value of individual resistivity measurements to describe the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.
NASA Astrophysics Data System (ADS)
Feng, Maoyuan; Liu, Pan; Guo, Shenglian; Shi, Liangsheng; Deng, Chao; Ming, Bo
2017-08-01
Operating rules have been used widely to decide reservoir operations because of their capacity for coping with uncertain inflow. However, stationary operating rules lack adaptability; thus, under changing environmental conditions, they cause inefficient reservoir operation. This paper derives adaptive operating rules based on time-varying parameters generated using the ensemble Kalman filter (EnKF). A deterministic optimization model is established to obtain optimal water releases, which are further taken as observations of the reservoir simulation model. The EnKF is formulated to update the operating rules sequentially, providing a series of time-varying parameters. To identify the index that dominates the variations of the operating rules, three hydrologic factors are selected: the reservoir inflow, ratio of future inflow to current available water, and available water. Finally, adaptive operating rules are derived by fitting the time-varying parameters with the identified dominant hydrologic factor. China's Three Gorges Reservoir was selected as a case study. Results show that (1) the EnKF has the capability of capturing the variations of the operating rules, (2) reservoir inflow is the factor that dominates the variations of the operating rules, and (3) the derived adaptive operating rules are effective in improving hydropower benefits compared with stationary operating rules. The insightful findings of this study could be used to help adapt reservoir operations to mitigate the effects of changing environmental conditions.
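The EnKF update used above to generate time-varying parameters can be sketched for a single scalar rule parameter. The linear release model theta * inflow and all numbers are synthetic assumptions, not Three Gorges data.

```python
# Sketch: scalar ensemble Kalman filter (EnKF) update for one time-varying
# operating-rule parameter theta, observed through release = theta * inflow.
import random

def enkf_update(ensemble, obs, obs_std, h, rng):
    hx = [h(t) for t in ensemble]
    m_t = sum(ensemble) / len(ensemble)
    m_h = sum(hx) / len(hx)
    cov_th = sum((t - m_t) * (x - m_h) for t, x in zip(ensemble, hx)) / (len(hx) - 1)
    var_h = sum((x - m_h) ** 2 for x in hx) / (len(hx) - 1)
    k = cov_th / (var_h + obs_std ** 2)                 # Kalman gain
    return [t + k * (obs + rng.gauss(0, obs_std) - x)   # perturbed-obs update
            for t, x in zip(ensemble, hx)]

rng = random.Random(0)
inflow, true_theta = 100.0, 0.6
ens = [rng.gauss(0.3, 0.2) for _ in range(200)]  # prior ensemble for theta
for _ in range(20):                              # assimilate 20 observations
    obs = true_theta * inflow + rng.gauss(0, 1.0)
    ens = enkf_update(ens, obs, 1.0, lambda t: t * inflow, rng)
mean_theta = sum(ens) / len(ens)
```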
Assessing cost-effectiveness of specific LID practice designs in response to large storm events
NASA Astrophysics Data System (ADS)
Chui, Ting Fong May; Liu, Xin; Zhan, Wenting
2016-02-01
Low impact development (LID) practices have become increasingly important in urban stormwater management worldwide. However, most research on design optimization focuses on relatively large scales, and there is very limited information or guidance regarding individual LID practice designs (i.e., optimal depth, width, and length). The objective of this study is to identify the optimal design by assessing the hydrological performance and cost-effectiveness of different designs of LID practices at a household or business scale, and to analyze the sensitivity of the hydrological performance and cost of the optimal design to different model and design parameters. First, EPA SWMM, automatically controlled by MATLAB, is used to obtain the peak runoff of different designs of three specific LID practices (i.e., green roof, bioretention, and porous pavement) under different design storms (i.e., the 2 yr and 50 yr design storms of Hong Kong, China and Seattle, U.S.). Then, the life cycle cost is estimated for the different designs, and the optimal design, defined as the design with the lowest cost and at least 20% peak runoff reduction, is identified. Finally, the sensitivity of the optimal design to the different design parameters is examined. The optimal design of the green roof tends to be larger in area but thinner, while the optimal designs of the bioretention and porous pavement tend to be smaller in area. To handle larger storms, however, it is more effective to increase the green roof depth, and to increase the area of the bioretention and porous pavement. Porous pavement is the most cost-effective for peak flow reduction, followed by bioretention and then green roof.
The cost-effectiveness, measured as peak runoff reduction per thousand US dollars, of LID practices in Hong Kong (e.g., 0.02, 0.15, and 0.93 L per 10³ US$ for green roof, bioretention, and porous pavement, respectively, for the 2 yr storm) is lower than that in Seattle (e.g., 0.03, 0.29, and 1.58 L per 10³ US$, respectively, for the 2 yr storm). The optimal designs are influenced by the model and design parameters (i.e., initial saturation, hydraulic conductivity, and berm height). However, this overall does not affect the main trends and key insights derived, and the results are therefore generic and relevant to the household/business-scale optimal design of LID practices worldwide.
glmnetLRC f/k/a lrc package: Logistic Regression Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-09
Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds additional model-fitting features to the existing glmnet and bestglm R packages. It was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.
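The optimal-threshold step can be sketched independently of the glmnet fit: given held-out scores and labels, try each observed score as a cutoff and keep the one minimizing a user-supplied (possibly asymmetric) loss. The scores, labels, and 5:1 cost ratio below are made up.

```python
# Sketch: choose a binary-classification threshold minimizing a custom loss.

def optimal_threshold(scores, labels, loss_fn):
    """Try each observed score as a cutoff; return (best_threshold, best_loss)."""
    best = None
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        l = loss_fn(fp, fn)
        if best is None or l < best[1]:
            best = (t, l)
    return best

scores = [0.1, 0.2, 0.4, 0.6, 0.7, 0.9]  # synthetic classifier scores
labels = [0,   0,   0,   1,   1,   1]
# false negatives cost 5x false positives
thr, loss = optimal_threshold(scores, labels, lambda fp, fn: fp + 5 * fn)
```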
Selection of sampling rate for digital control of aircraft
NASA Technical Reports Server (NTRS)
Katz, P.; Powell, J. D.
1974-01-01
The considerations in selecting sample rates for the digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model that includes a bending mode and wind gusts was studied. The following factors that influence the selection of the sampling rate were identified: (1) the time and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to parameter variations. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.
Optimization of the Number and Location of Tsunami Stations in a Tsunami Warning System
NASA Astrophysics Data System (ADS)
An, C.; Liu, P. L. F.; Pritchard, M. E.
2014-12-01
Optimizing the number and location of tsunami stations in designing a tsunami warning system is an important and practical problem. It is always desirable to maximize the capability of the data obtained from the stations for constraining the earthquake source parameters, and at the same time to minimize the number of stations. During the 2011 Tohoku tsunami event, 28 coastal gauges and DART buoys in the near field recorded tsunami waves, providing an opportunity to assess the effectiveness of those stations in identifying the earthquake source parameters. Assuming a single-plane fault geometry, inversions of tsunami data from combinations of various numbers (1-28) of stations and locations are conducted, and their effectiveness is evaluated according to the residuals of the inverse method. Results show that the optimized locations of stations depend on the number of stations used. If the stations are optimally located, 2-4 stations are sufficient to constrain the source parameters. Regarding the optimized locations, stations must be spread uniformly in all directions, which is not surprising. It is also found that stations within the source region generally give a worse constraint on the earthquake source than stations farther from the source, owing to the exaggeration of model error in matching large-amplitude waves at near-source stations. Quantitative discussions of these findings will be given in the presentation. Applying a similar analysis to the Manila Trench based on artificial earthquake and tsunami scenarios, the optimal locations of tsunami stations are obtained, providing guidance for deploying a tsunami warning system in this region.
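The station-subset evaluation above rests on a linear least-squares inversion d = G m. A noise-free 2-parameter sketch with hypothetical Green's-function rows:

```python
# Sketch: least-squares source inversion d = G m for two source parameters,
# solved via the 2x2 normal equations. G rows are synthetic station responses.

def lstsq2(G, d):
    """Solve G^T G m = G^T d for 2 parameters; return (m, sum-squared residual)."""
    a = sum(g[0] * g[0] for g in G)
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G)
    p = sum(g[0] * di for g, di in zip(G, d))
    q = sum(g[1] * di for g, di in zip(G, d))
    det = a * c - b * b
    m = ((c * p - b * q) / det, (a * q - b * p) / det)
    res = sum((g[0] * m[0] + g[1] * m[1] - di) ** 2 for g, di in zip(G, d))
    return m, res

true_m = (2.0, -1.0)                                       # "true" source
G_all = [(1.0, 0.0), (0.8, 0.6), (0.0, 1.0), (0.5, 0.5)]   # 4 stations
d_all = [g[0] * true_m[0] + g[1] * true_m[1] for g in G_all]
m_est, res = lstsq2(G_all, d_all)   # noise-free data: exact recovery
```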
Gulati, Abhishek; Faed, James M; Isbister, Geoffrey K; Duffull, Stephen B
2015-10-01
Dosing of enoxaparin, like other anticoagulants, may result in bleeding following excessive doses and clot formation if the dose is too low. We recently showed that a factor Xa based clotting time test could potentially assess the effect of enoxaparin on the clotting system. However, the test did not perform well in subsequent individuals, so the effectiveness of an exogenous phospholipid, Actin FS, in reducing the variability of the clotting time was assessed. The aim of this work was to conduct an adaptive pilot study to determine the range of concentrations of Xa and Actin FS to take forward into a proof-of-concept study. A nonlinear parametric function was developed to describe the response surface over the factors of interest. An adaptive method was used to estimate the parameters using a D-optimal design criterion. In order to provide a reasonable probability of observing a success of the clotting time test, a P-optimal design criterion was incorporated using a loss function, yielding a hybrid DP-optimality. The adaptive DP-optimality method resulted in efficient estimation of the model parameters using data from only 6 healthy volunteers. The response surface modelling identified a range of sets of Xa and Actin FS concentrations, any of which could be used for the proof-of-concept study. This study shows that parsimonious adaptive DP-optimal designs may provide both precise parameter estimates for response surface modelling and clinical confidence in the potential benefits of the study.
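The D-optimal criterion behind the design can be illustrated for a simple first-order response-surface model: among candidate level combinations, choose the subset maximizing det(X^T X). The two-factor grid below is hypothetical, not the actual Xa / Actin FS concentrations.

```python
# Sketch: D-optimal selection of 4 design points for a linear model
# y = b0 + b1*x1 + b2*x2 over a hypothetical 3x3 candidate grid.
from itertools import combinations

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def d_criterion(design):
    X = [(1.0, x1, x2) for x1, x2 in design]        # design matrix rows
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    return det3(xtx)                                # D-optimality criterion

candidates = [(x1, x2) for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
best = max(combinations(candidates, 4), key=d_criterion)
# for a first-order model, the D-optimal 4-point design is the 4 corners
```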
Parameter identification and optimization of slide guide joint of CNC machine tools
NASA Astrophysics Data System (ADS)
Zhou, S.; Sun, B. B.
2017-11-01
The joint surface has an important influence on the performance of CNC machine tools. To identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established and an optimum design method is used, based on finite element simulation and modal testing. The mode with the most influence on the dynamics of the slide joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin Hypercube Sampling and Monte Carlo Simulation. The results show that the vertical stiffness of the slide joint surface formed by the bed and the slide plate has the most pronounced influence on the structure. This stiffness is therefore taken as the optimization variable, and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values before and after optimization into the FEM of the machine tool shows that the dynamic performance of the machine tool is improved.
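The Latin Hypercube Sampling step can be sketched as follows; the two stiffness ranges are illustrative placeholders, not identified joint values.

```python
# Sketch: Latin hypercube sample of two joint-stiffness variables. Each
# variable's range is split into n equal bins, and each bin is hit exactly once.
import random

def latin_hypercube(n, bounds, seed=0):
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        perm = list(range(n))
        rng.shuffle(perm)                 # random bin order per variable
        width = (hi - lo) / n
        cols.append([lo + (p + rng.random()) * width for p in perm])
    return list(zip(*cols))               # n samples, one per row

# two hypothetical stiffness ranges, in N/m
samples = latin_hypercube(8, [(1e8, 5e8), (1e7, 9e7)])
```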
Adaptive convex combination approach for the identification of improper quaternion processes.
Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P
2014-01-01
Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
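The convex-combination idea, reduced here from the quaternion domain to a real-valued analogue for brevity, adapts a mixing parameter lambda through a sigmoid-bounded auxiliary variable. The system coefficients, step sizes, and noise level below are arbitrary illustrative choices, not the paper's settings.

```python
# Sketch: convex combination of two LMS filters (a fast and a slow one),
# y = lam*y1 + (1-lam)*y2, with lam adapted via a sigmoid of an auxiliary a.
import math
import random

def lms_step(w, x, e, mu):
    return [wi + mu * e * xi for wi, xi in zip(w, x)]

rng = random.Random(3)
n_taps, mu, mu_a = 2, 0.05, 1.0
w1, w2, a = [0.0] * n_taps, [0.0] * n_taps, 0.0
true_w = [0.7, -0.4]                      # unknown system to identify
buf = [0.0] * n_taps
for _ in range(2000):
    buf = [rng.gauss(0, 1)] + buf[:-1]    # input tap-delay line
    d = sum(t * x for t, x in zip(true_w, buf)) + rng.gauss(0, 0.01)
    y1 = sum(w * x for w, x in zip(w1, buf))
    y2 = sum(w * x for w, x in zip(w2, buf))
    lam = 1 / (1 + math.exp(-a))          # sigmoid keeps lam in (0, 1)
    e = d - (lam * y1 + (1 - lam) * y2)
    w1 = lms_step(w1, buf, d - y1, mu)        # filter 1: fast LMS
    w2 = lms_step(w2, buf, d - y2, mu / 10)   # filter 2: slow LMS
    a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # mixing-parameter update
```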
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh
2008-07-15
This study is the first to use the Taguchi experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt %), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (sinter, hematite, or limonite), were selected for experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt %, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% relative to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt %, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through ANOVA, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs), accounting for 74.7% of the total contribution of the four selected parameters. The optimal combination could also slightly enhance both sinter productivity and sinter strength (30.3 t/m²/day and 72.4%, respectively) relative to the reference operating condition (29.9 t/m²/day and 72.2%, respectively). These results further support the applicability of the optimal combination for real-scale sinter production without compromising sinter productivity or strength.
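The percent-contribution ranking from the ANOVA step can be sketched on a small L4(2^3) orthogonal array; the response values below are synthetic, not sinter-pot measurements.

```python
# Sketch: percent contribution of each factor from an L4(2^3) orthogonal
# array via sum-of-squares decomposition (the Taguchi/ANOVA ranking idea).

L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # factor levels per run
y = [10.0, 12.0, 20.0, 22.0]                       # synthetic responses

grand = sum(y) / len(y)
ss_total = sum((v - grand) ** 2 for v in y)
contrib = []
for f in range(3):
    # mean response at each level of factor f (each level occurs twice in L4)
    means = [sum(v for r, v in zip(L4, y) if r[f] == lvl) / 2 for lvl in (0, 1)]
    ss_f = 2 * sum((m - grand) ** 2 for m in means)    # factor sum of squares
    contrib.append(100 * ss_f / ss_total)              # percent contribution
```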
Büttner, Kathrin; Krieter, Joachim; Traulsen, Arne; Traulsen, Imke
2013-01-01
Centrality parameters in animal trade networks typically have right-skewed distributions, implying that these networks are highly resistant to the random removal of holdings but vulnerable to the targeted removal of the most central holdings. In the present study, we analysed the structural changes in an animal trade network topology caused by the targeted removal of holdings using specific centrality parameters, in comparison to the random removal of holdings. Three different time periods were analysed: the three-year network, the yearly networks and the monthly networks. The aim of this study was to identify appropriate measures for targeted removal which lead to a rapid fragmentation of the network. Furthermore, the optimal combination of the removal of three holdings, regardless of their centrality, was identified. The results showed that centrality parameters based on ingoing trade contacts, e.g. in-degree, ingoing infection chain and ingoing closeness, were not suitable for rapid fragmentation in any of the three time periods. Removal based on parameters considering the outgoing trade contacts was more efficient. In all networks, a maximum of 7.0% (on average 5.2%) of the holdings had to be removed to reduce the size of the largest component by more than 75%. The smallest difference from the optimal combination for all three time periods was obtained by removal based on out-degree, with on average 1.4% of holdings removed, followed by outgoing infection chain and outgoing closeness. Targeted removal using betweenness centrality differed the most from the optimal combination in comparison to the other parameters that consider outgoing trade contacts. Due to the pyramidal structure and the directed nature of the pork supply chain, the most efficient interruption of the infection chain for all three time periods was obtained by targeted removal based on out-degree. PMID:24069293
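The removal experiment can be reproduced in miniature: the sketch below builds a toy pyramidal trade network (invented, not the study's data), then compares the largest weakly connected component before and after targeted removal of the highest out-degree holding.

```python
from collections import defaultdict, deque

def largest_component(nodes, edges):
    """Size of the largest weakly connected component of a directed graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)          # weak connectivity ignores edge direction
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best

def remove_by_out_degree(nodes, edges, k):
    """Targeted removal: delete the k holdings with the highest out-degree."""
    out_deg = defaultdict(int)
    for u, _ in edges:
        out_deg[u] += 1
    doomed = set(sorted(nodes, key=lambda n: -out_deg[n])[:k])
    kept = [n for n in nodes if n not in doomed]
    kept_edges = [(u, v) for u, v in edges if u not in doomed and v not in doomed]
    return largest_component(kept, kept_edges)

# Toy pyramidal network: holding 1 sells to three multipliers (2-4), each of
# which sells to three finishers (5-13). Purely illustrative.
nodes = list(range(1, 14))
edges = [(1, m) for m in (2, 3, 4)] + \
        [(m, f) for m in (2, 3, 4) for f in range(5 + (m - 2) * 3, 8 + (m - 2) * 3)]

before = largest_component(nodes, edges)            # whole pyramid connected
after = remove_by_out_degree(nodes, edges, 1)       # remove the apex holding
```

Removing a single high out-degree holding splits the toy pyramid into three small components, mirroring the rapid fragmentation the study reports for out-degree-based removal.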
NASA Astrophysics Data System (ADS)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, conceptually simple, and promising for field application.
Mission planning for on-orbit servicing through multiple servicing satellites: A new approach
NASA Astrophysics Data System (ADS)
Daneshjou, K.; Mohammadi-Dehabadi, A. A.; Bakhtiari, M.
2017-09-01
In this paper, a novel approach is proposed for the mission planning of on-orbit servicing tasks such as visual inspection, active debris removal and refueling through multiple servicing satellites (SSs). The scheduling is done with the aim of minimizing fuel consumption and mission duration, so a multi-objective optimization problem arises, which is solved by employing a particle swarm optimization algorithm. The Taguchi technique is also employed for robust design of the effective parameters of the optimization problem. The decision parameters obtained from the optimization problem are the day on which the SSs have to leave the parking orbit, the transfer duration from parking orbit to final orbit, the transfer duration from one target to another, and the time spent by the SS on each target. The key idea is that, in addition to the aforementioned decision parameters, the eccentricity and inclination of the initial orbit and the phase differences between the SSs on the initial orbit are also identified by the optimization problem, so that the designer does not have to determine them. Furthermore, it is assumed that the SS and the target rendezvous at the servicing point and that the SS does not perform any phasing maneuver to reach the target. It should be noted that Lambert's theorem is used to determine the transfer orbit. The results show that the proposed approach reduces the fuel consumption and the mission duration significantly in comparison with conventional approaches.
NASA Technical Reports Server (NTRS)
Duffy, Kirsten P.
2016-01-01
NASA Glenn Research Center is investigating hybrid electric and turboelectric propulsion concepts for future aircraft to reduce fuel burn, emissions, and noise. Systems studies show that the weight and efficiency of the electric system components need to be improved for this concept to be feasible. This effort aims to identify design parameters that affect power density and efficiency for a double-Halbach-array permanent-magnet ironless axial-flux motor configuration. These include both geometrical and higher-order parameters, such as pole count, rotor speed, current density, and the geometries of the magnets, windings, and air gap.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for optimal sampling-well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements when identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and estimation steps, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximate likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. (Figure captions: contours of the expected information gain, with the optimal observation location at the maximum; posterior marginal probability densities of the unknown parameters at the designed location versus seven randomly chosen locations, with true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.)
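As a toy illustration of relative-entropy-based design, the sketch below scores candidate sampling locations by expected information gain (mutual information between the unknown parameter and the noisy reading) for a hypothetical one-parameter source model c(theta, x) = theta * exp(-x) with Gaussian noise. The forward model, parameter grid, and noise level are all assumptions for illustration, not the paper's setup.

```python
import math

def expected_information_gain(x, thetas, sigma=0.05, ny=401):
    """Expected relative entropy (in nats) between a uniform prior over the
    theta grid and the posterior, averaged over the prior predictive of a
    reading at location x, for the toy model c(theta, x) = theta * exp(-x)."""
    prior = 1.0 / len(thetas)
    mus = [t * math.exp(-x) for t in thetas]
    lo, hi = min(mus) - 5.0 * sigma, max(mus) + 5.0 * sigma
    dy = (hi - lo) / (ny - 1)
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    eig = 0.0
    for i in range(ny):
        y = lo + i * dy
        liks = [norm * math.exp(-0.5 * ((y - mu) / sigma) ** 2) for mu in mus]
        evidence = sum(prior * l for l in liks)   # prior predictive density
        for l in liks:
            if l > 1e-300:
                eig += prior * l * math.log(l / evidence) * dy
    return eig

thetas = [0.5, 1.0, 1.5, 2.0]            # hypothetical source strengths
candidates = [0.1, 1.0, 3.0]             # hypothetical sampling locations
gains = {x: expected_information_gain(x, thetas) for x in candidates}
best_x = max(gains, key=gains.get)       # design rule: maximize the gain
```

Near the source (x = 0.1) the candidate strengths are fully distinguishable, so the gain approaches the prior entropy ln(4); far away the signals overlap within the noise and the gain drops, which is exactly why the design rule prefers the informative location.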
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, along with a case study highlighting the tool's effectiveness.
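A minimal global-best PSO of the kind described might look like the sketch below. The "simulation" is a toy exponential model and the "flight data" are synthetic, standing in for Trick outputs and Morpheus telemetry; the swarm weights are conventional textbook values, not the paper's settings.

```python
import math
import random

def simulate(params, times):
    """Toy subsystem model standing in for the Trick simulation:
    a damped response a * exp(-b * t)."""
    a, b = params
    return [a * math.exp(-b * t) for t in times]

def pso_tune(cost, bounds, n_particles=30, iters=60, seed=1):
    """Minimal global-best particle swarm optimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.72, 1.49, 1.49              # common inertia/attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

times = [0.1 * k for k in range(20)]
flight_data = simulate((4.0, 1.5), times)     # synthetic stand-in for telemetry
cost = lambda p: sum((m - s) ** 2 for m, s in zip(flight_data, simulate(p, times)))
(best_a, best_b), err = pso_tune(cost, [(0.1, 10.0), (0.1, 5.0)])
```

The tuner needs only the cost function, so scale and discontinuities in the model are indeed irrelevant to the caller, which is the property the abstract emphasizes.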
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov chain Monte Carlo sampling scheme which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I
2016-08-22
The current study presents an investigation of the optimization of injection molding parameters of HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, including filler concentration (i.e., TiO₂), barrel temperature, residence time and holding time, were chosen, each at three different levels. Mechanical properties, such as yield strength, Young's modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was also applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
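The grey relational steps (normalization, deviation, coefficient, grade) can be sketched as below. The three response tuples are invented, not the paper's L9 data, and the distinguishing coefficient ζ = 0.5 is the conventional choice.

```python
def grey_relational_grades(runs, larger_better, zeta=0.5):
    """Grey relational analysis for a multi-response design.
    runs: list of response tuples, one per experimental run.
    larger_better: per-response flag (True for strength/modulus/elongation).
    Assumes deviations are rescaled to [0, 1] by the normalization step."""
    n_resp = len(runs[0])
    # 1) normalize each response to [0, 1] (larger-the-better orientation)
    norm = []
    for run in runs:
        row = []
        for j in range(n_resp):
            col = [r[j] for r in runs]
            lo, hi = min(col), max(col)
            x = (run[j] - lo) / (hi - lo) if hi > lo else 1.0
            row.append(x if larger_better[j] else 1.0 - x)
        norm.append(row)
    # 2) grey relational coefficient from the deviation to the ideal (= 1):
    #    GRC = zeta / (deviation + zeta), since min/max deviations are 0 and 1
    coeffs = [[zeta / (1.0 - x + zeta) for x in row] for row in norm]
    # 3) grade = mean coefficient over all responses
    return [sum(row) / n_resp for row in coeffs]

# Illustrative responses (yield strength MPa, Young's modulus MPa, elongation %)
# for three runs; values are made up for the sketch.
runs = [(22.0, 900.0, 8.0), (25.0, 1100.0, 6.5), (24.0, 1000.0, 9.0)]
grades = grey_relational_grades(runs, larger_better=[True, True, True])
best_run = grades.index(max(grades))
```

In the full Taguchi workflow, the grades would then be averaged per factor level to pick the optimal level of each control factor.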
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
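The Fisher-information machinery underlying OED/PE can be sketched numerically: the snippet below builds the Fisher information matrix from finite-difference sensitivities for a generic two-parameter saturating model (a stand-in for the CTMI, with made-up nominal parameters and sampling schedules) and compares two candidate experiments by the D-criterion.

```python
import math

def model(p, t):
    """Generic two-parameter saturating response y = p0 * (1 - exp(-p1 * t));
    a stand-in for the growth model, not the CTMI itself."""
    return p[0] * (1.0 - math.exp(-p[1] * t))

def fisher_information(p, times, sigma=0.05, h=1e-6):
    """2x2 Fisher information matrix F = sum_t s(t) s(t)^T / sigma^2, with
    sensitivities s(t) = dy/dp from central finite differences, assuming
    independent Gaussian measurement noise of standard deviation sigma."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        s = []
        for k in range(2):
            up = list(p); up[k] += h
            dn = list(p); dn[k] -= h
            s.append((model(up, t) - model(dn, t)) / (2.0 * h))
        for i in range(2):
            for j in range(2):
                F[i][j] += s[i] * s[j] / sigma ** 2
    return F

def d_criterion(F):
    """Determinant of the 2x2 information matrix (D-optimality)."""
    return F[0][0] * F[1][1] - F[0][1] * F[1][0]

p_nominal = [1.0, 0.8]                    # hypothetical nominal parameters
early_only = [0.1, 0.2, 0.3, 0.4]         # all samples before the bend
spread = [0.2, 1.0, 3.0, 6.0]             # samples spanning the dynamics
better = ("spread"
          if d_criterion(fisher_information(p_nominal, spread))
          > d_criterion(fisher_information(p_nominal, early_only))
          else "early")
```

Early-time sensitivities to the two parameters are nearly collinear, so the early-only schedule yields a nearly singular information matrix; spreading the samples breaks that collinearity, which is the essence of why input/schedule shaping improves identifiability.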
Optimized parameter estimation in the presence of collective phase noise
NASA Astrophysics Data System (ADS)
Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried
2016-11-01
We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations. We identify the optimal rotation angle for different measurement times. Second, we show that sub-shot-noise sensitivity—up to the Heisenberg limit—can be reached in the presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting of a general symmetric Dicke state between both inputs and discuss possible experimental realizations of differential interferometry.
NASA Astrophysics Data System (ADS)
Hu, K. M.; Li, Hua
2018-07-01
A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and a multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position-and-size optimization decreased by 14.0% compared to position-only optimization; those of position-and-tilt-angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.
Trade Services System Adaptation for Sustainable Development
NASA Astrophysics Data System (ADS)
Khrichenkov, A.; Shaufler, V.; Bannikova, L.
2017-11-01
Under market conditions, the trade services system in post-Soviet Russia, being one of the most important city infrastructures, loses its systematic and hierarchic consistency, hence provoking the degradation of communicating transport systems and the urban planning framework. This article describes the results of research carried out to identify objects and object parameters that influence the functioning of a locally significant trade services system. Based on the revealed consumer behaviour patterns, we propose methods to determine the optimal parameters of objects inside a locally significant trade services system.
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0 , and friction velocity u_* . Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_* . The methodology has been verified against laboratory datasets of flow above model urban canopies.
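The displacement-height scan at the heart of this procedure can be sketched as follows, assuming a noiseless synthetic profile with known d, z0, and u_*. The subset-generation and histogram steps of the full procedure are omitted for brevity; only the fit-over-a-d-grid idea is shown.

```python
import math

def fit_loglaw(zs, us, d):
    """Least-squares fit of u = a * ln(z - d) + b for a fixed displacement
    height d; from the log law, a = u_*/kappa and b = -a * ln(z0)."""
    xs = [math.log(z - d) for z in zs]
    n = len(xs)
    mx, mu = sum(xs) / n, sum(us) / n
    a = (sum((x - mx) * (u - mu) for x, u in zip(xs, us))
         / sum((x - mx) ** 2 for x in xs))
    b = mu - a * mx
    sse = sum((u - (a * x + b)) ** 2 for x, u in zip(xs, us))
    return a, b, sse

def roughness_parameters(zs, us, d_grid, kappa=0.4):
    """Scan candidate d values, keep the one minimizing the residual, then
    recover u_* and z0 from the fitted slope and intercept."""
    d_best = min(d_grid, key=lambda d: fit_loglaw(zs, us, d)[2])
    a, b, _ = fit_loglaw(zs, us, d_best)
    return d_best, math.exp(-b / a), kappa * a   # d, z0, u_*

# Synthetic profile with known d = 0.05 m, z0 = 0.01 m, u_* = 0.3 m/s
kappa, d_true, z0_true, us_true = 0.4, 0.05, 0.01, 0.3
zs = [0.10, 0.15, 0.20, 0.30, 0.45, 0.60]
us = [us_true / kappa * math.log((z - d_true) / z0_true) for z in zs]
d_grid = [i * 0.005 for i in range(16)]      # candidate d: 0 .. 0.075 m
d_hat, z0_hat, ustar_hat = roughness_parameters(zs, us, d_grid)
```

With noise-free data the residual vanishes at the true d, so the scan recovers all three roughness parameters; with laboratory data, the minimum of the summed residuals over all generated sub-profiles plays the same role.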
Study of process parameter on mist lubrication of Titanium (Grade 5) alloy
NASA Astrophysics Data System (ADS)
Maity, Kalipada; Pradhan, Swastik
2017-02-01
This paper deals with the machinability of Ti-6Al-4V alloy with mist cooling lubrication using carbide inserts. The influence of the process parameters on the cutting forces, evolution of tool wear, surface finish of the workpiece, material removal rate and chip reduction coefficient has been investigated. Weighted principal component analysis coupled with grey relational analysis optimization is applied to identify the optimum setting of the process parameters. The optimal condition was a cutting speed of 160 m/min, a feed of 0.16 mm/rev and a depth of cut of 1.6 mm. The effects of cutting speed and depth of cut on the type of chip formation were observed; most of the chips formed were of the long tubular and long helical types. Image analyses of the segmented chips were carried out to study the shape and size of the saw-tooth profile of the serrated chips. It was found that by increasing the cutting speed from 95 m/min to 160 m/min, the free surface lamella of the chips increased and the saw-tooth segments became more clearly visible.
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid inside a large range of temperatures and state-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple step genetic algorithm based optimization procedure designed for large scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
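A minimal version of such an equivalent-circuit model, with parameters scheduled on state of charge via piecewise-linear (spline-like) lookup, might look like the sketch below. The breakpoint values and the fixed RC pair are illustrative assumptions, not the identified A123 parameters, and the temperature/current-direction scheduling of the paper is omitted.

```python
def interp(x, xs, ys):
    """Piecewise-linear lookup, playing the role of the linear splines."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + w * (ys[i + 1] - ys[i])

# Illustrative breakpoints only: open-circuit voltage and series resistance
# as functions of state of charge (SOC).
SOC_PTS = [0.0, 0.5, 1.0]
OCV_PTS = [3.0, 3.3, 3.4]        # V
R0_PTS = [0.012, 0.010, 0.009]   # ohm
R1, C1 = 0.015, 2000.0           # one fixed RC pair for the sketch

def simulate_cell(current, dt, soc0, capacity_ah=2.3):
    """First-order RC equivalent circuit, explicit Euler integration:
    V = OCV(soc) - R0(soc) * I - V1,  dV1/dt = I/C1 - V1/(R1*C1).
    Discharge current is positive; capacity 2.3 Ah is illustrative."""
    soc, v1, volts = soc0, 0.0, []
    for i_k in current:
        soc -= i_k * dt / (capacity_ah * 3600.0)            # coulomb counting
        v1 += dt * (i_k / C1 - v1 / (R1 * C1))              # RC branch state
        volts.append(interp(soc, SOC_PTS, OCV_PTS)
                     - interp(soc, SOC_PTS, R0_PTS) * i_k - v1)
    return soc, volts

soc_end, v = simulate_cell([2.3] * 600, dt=1.0, soc0=0.9)   # 10 min at 1C
```

In the identification setting, the breakpoint values (and RC parameters) would be the decision variables of the genetic algorithm, tuned so that the simulated terminal voltage matches measured charge/discharge data.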
NASA Astrophysics Data System (ADS)
Brzęczek, Mateusz; Bartela, Łukasz
2013-12-01
This paper presents the parameters of a reference oxy-combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility of recovering heat in the analyzed power plant is discussed. The decision variables and the thermodynamic functions for the optimization algorithm were identified. The principles of operation of the genetic algorithm and the methodology of the calculations are presented. A sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. Optimization of the heat recovery from the air separation unit, the flue gas condition and the CO2 capture and compression installation using a genetic algorithm was designed to replace the low-pressure section of the regenerative water heaters of the steam cycle in the analyzed unit. The result was an increase in the power and efficiency of the entire power plant.
Zonta, Zivko J; Flotats, Xavier; Magrí, Albert
2014-08-01
The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
Selection of optimal multispectral imaging system parameters for small joint arthritis detection
NASA Astrophysics Data System (ADS)
Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija
2018-02-01
Early detection and treatment of arthritis are essential for a successful treatment outcome, but detection has proven very challenging with existing diagnostic methods. Novel methods based on optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging the small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented. The purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with realistic geometry and tissue distributions, based on an MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. A uniform illumination of the finger within the 400-1100 nm spectral range was simulated, and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified, and the optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
NASA Astrophysics Data System (ADS)
Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.
2017-03-01
The tungsten inert gas (TIG) torch is one of the most recently adopted heat sources for surface modification of engineering parts, giving results similar to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating has been produced from precoated silicon carbide (SiC) powders on an AISI 4340 low-alloy steel substrate using the TIG welding torch process. A design of experiments based on the Taguchi approach has been adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters such as arc current, travelling speed, welding voltage and argon flow rate on the tribological response (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing the wear scar is the welding voltage. Finally, the convenient and economical Taguchi approach used in this study was efficient in finding the optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness in TIG-coated surfaces.
NASA Astrophysics Data System (ADS)
Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.
2017-08-01
Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. This article deals with the choice of grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to 4 times. In addition, it is established that some parameters have a major impact on the result of the modelling.
NASA Astrophysics Data System (ADS)
Li, Peng-fei; Zhou, Xiao-jun
2015-12-01
Subsea tunnel lining structures should be designed to sustain the loads transmitted from the surrounding ground and groundwater during excavation. Extremely high pore-water pressure reduces the effective strength of the country rock that surrounds a tunnel, thereby lowering the arching effect and stratum stability of the structure. In this paper, the mechanical behavior and shape optimization of the lining structure for the Xiang'an tunnel excavated in weathered slots are examined. Eight cross sections with different geometric parameters are adopted to study the mechanical behavior and shape optimization of the lining structure. The hyperstatic reaction method is used through the finite element analysis software ANSYS. The mechanical behavior of the lining structure is evidently affected by the geometric parameters of the cross-sectional shape. The minimum safety factor of the lining structure elements is set as the objective function, and the tunnel shape that maximizes this minimum safety factor is identified. The minimum safety factor increases significantly after optimization. The optimized cross section significantly improves the mechanical characteristics of the lining structure and effectively reduces its deformation. The optimization process and program are formulated parametrically so that the method can be applied to the optimization design of other similar structures. The results obtained from this study enhance our understanding of the mechanical behavior of lining structures for subsea tunnels and are also beneficial to the optimal design of lining structures in general.
Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling
NASA Astrophysics Data System (ADS)
Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.
2018-05-01
Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique that optimizes a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires modeling assumptions, such as the PT thickness and boundary conditions, which are reported with a wide range of variation in the literature, affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated, including changes in PT thickness, in the pars-flaccida Young's modulus, and in possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness, and the least influential was the pars-flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
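Golden-section search, the optimization method named in this abstract, can be sketched in a few lines; the quadratic cost function and search bracket below are hypothetical stand-ins for the study's shape-matching cost, not its actual model.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618, the golden ratio inverse
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]; reuse c as the new upper probe
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]; reuse d as the new lower probe
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Hypothetical cost with a minimum at E = 2.3 (arbitrary units)
cost = lambda e: (e - 2.3) ** 2
e_opt = golden_section_minimize(cost, 0.0, 10.0)
```

Each iteration shrinks the bracket by the golden ratio while reusing one previous function evaluation, which is why the method suits expensive cost functions like an FE simulation.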
Eisenberg, Marisa C; Jain, Harsh V
2017-10-27
Mathematical modeling has a long history in the field of cancer therapeutics, and there is increasing recognition that it can help uncover the mechanisms that underlie tumor response to treatment. However, making quantitative predictions with such models often requires parameter estimation from data, raising questions of parameter identifiability and estimability. Even in the case of structural (theoretical) identifiability, imperfect data and the resulting practical unidentifiability of model parameters can make it difficult to infer the desired information, and in some cases, to yield biologically correct inferences and predictions. Here, we examine parameter identifiability and estimability using a case study of two compartmental, ordinary differential equation models of cancer treatment with drugs that are cell cycle-specific (taxol) as well as non-specific (oxaliplatin). We proceed through model building, structural identifiability analysis, parameter estimation, practical identifiability analysis and its biological implications, as well as alternative data collection protocols and experimental designs that render the model identifiable. We use the differential algebra/input-output relationship approach for structural identifiability, and primarily the profile likelihood approach for practical identifiability. Despite the models being structurally identifiable, we show that without consideration of practical identifiability, incorrect cell cycle distributions can be inferred, which would result in suboptimal therapeutic choices. We illustrate the usefulness of estimating practically identifiable combinations (in addition to the more typically considered structurally identifiable combinations) in generating biologically meaningful insights. We also use simulated data to evaluate how the practical identifiability of the model would change under alternative experimental designs. 
These results highlight the importance of understanding the underlying mechanisms rather than purely using parsimony or information criteria/goodness-of-fit to decide model selection questions. The overall roadmap for identifiability testing laid out here can be used to help provide mechanistic insight into complex biological phenomena, reduce experimental costs, and optimize model-driven experimentation. Copyright © 2017. Published by Elsevier Ltd.
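The profile-likelihood idea used above for practical identifiability can be illustrated on a toy model: fix the parameter of interest on a grid, re-optimize the remaining parameters at each grid point, and inspect the resulting profile. The exponential-decay model, noise-free data, and grid below are invented for illustration and are unrelated to the paper's cancer-treatment models.

```python
import math

# Toy model: y = a * exp(-b * t). We profile b by fixing it on a grid and
# optimizing a out analytically (linear least squares, since y is linear in a).
t = [0.0, 1.0, 2.0, 3.0, 4.0]
a_true, b_true = 2.0, 0.5
y = [a_true * math.exp(-b_true * ti) for ti in t]  # noise-free toy data

def profile_ssr(b):
    """Smallest sum of squared residuals with b fixed (a optimized out)."""
    x = [math.exp(-b * ti) for ti in t]
    a_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return sum((a_hat * xi - yi) ** 2 for xi, yi in zip(x, y))

# Sweep b over a grid; a flat profile would signal practical unidentifiability
grid = [0.1 + 0.01 * k for k in range(100)]
profile = [(b, profile_ssr(b)) for b in grid]
b_best = min(profile, key=lambda p: p[1])[0]
```

In practice the profile is compared against a chi-square threshold to obtain likelihood-based confidence intervals; a profile that never re-crosses the threshold indicates a practically unidentifiable parameter.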
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming problem, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
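The second-order (Newton) step mentioned above can be sketched for a scalar parameter; the toy functional, its gradient, and its Hessian below are hypothetical stand-ins for the abstract's second-order functional increment formula.

```python
def newton_minimize(grad, hess, p0, iters=20):
    """Newton's method on a scalar parameter: p <- p - J'(p) / J''(p)."""
    p = p0
    for _ in range(iters):
        p -= grad(p) / hess(p)
    return p

# Toy functional J(p) = (p - 3)^2 + 0.1 * (p - 3)^4, minimized at p = 3
grad = lambda p: 2 * (p - 3) + 0.4 * (p - 3) ** 3
hess = lambda p: 2 + 1.2 * (p - 3) ** 2
p_opt = newton_minimize(grad, hess, p0=0.0)
```

Near the minimum the iteration converges quadratically, which is the accuracy advantage over first-order (gradient) methods that the abstract alludes to.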
Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo
2018-03-30
Pollution and the eutrophication process are increasing in Lake Yahuarcocha, and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
Wing optimization for space shuttle orbiter vehicles
NASA Technical Reports Server (NTRS)
Surber, T. E.; Bornemann, W. E.; Miller, W. D.
1972-01-01
The results of a parametric study performed to determine the optimum wing geometry for a proposed space shuttle orbiter are presented. The study establishes the minimum-weight wing for a series of wing-fuselage combinations subject to constraints on aerodynamic heating, wing trailing-edge sweep, and wing overhang. It consists of a generalized design evaluation with the flexibility to arbitrarily vary those wing parameters that influence the vehicle system design and its performance. The study is structured to allow inputs of aerodynamic, weight, aerothermal, structural, and material data in a general form so that the influence of these parameters on the design optimization process can be isolated and identified. This procedure displays the sensitivity of the system design to variations in wing geometry. The parameters of interest are varied in a prescribed fashion on a selected fuselage and the effect on the total vehicle weight is determined. The primary variables investigated are: wing loading, aspect ratio, leading-edge sweep, thickness ratio, and taper ratio.
Wortmann, Birgit; Knorr, Jürgen
2012-08-01
In 2001 and 2003, at the University of Pavia, Italy, boron neutron capture therapy (BNCT) was successfully used in the treatment of hepatic colorectal metastases (Pinelli et al., 2002; Zonta et al., 2006). The treatment procedure (TAOrMINA protocol) is characterised by auto-transplantation and extracorporeal irradiation of the liver using a thermal neutron beam. The clinical use of this approach requires well-founded data and an optimized irradiation facility. In order to start this work and to decide upon its feasibility at the research reactor TRIGA Mainz, basic data and requirements have been considered (Wortmann, 2008). Computer calculations using the ATTILA (Transpire Inc. 2006) and MCNP (LANL, 2005) codes have been performed, incorporating data from conventional radiation therapy and from the TAOrMINA approach, and yielding reasonable estimates. Basic data, requirements, and optimal parameters have been worked out, especially for use at an optimized TRIGA irradiation facility (Wortmann, 2008). Advantages of extracorporeal irradiation with auto-transplantation and the potential of an optimized irradiation facility could be identified. Within the requirements, turning the explanted organ over by 180° appears preferable to a whole-side source, similar to a permanent rotation of the organ. The design study and the parameter optimization confirm the potential of this approach to treat metastases in explanted organs. The results do not represent actual treatment data but a first estimation. Although all specific values refer to the TRIGA Mainz, they may act as a useful guide for other types of neutron sources. The recommended modifications (Wortmann, 2008) show the suitability of TRIGA reactors as a radiation source for BNCT of extracorporeally irradiated and auto-transplanted organs. Copyright © 2012 Elsevier Ltd. All rights reserved.
Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin
2017-09-01
The Firefly Algorithm (FA) is a metaheuristic inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research the algorithm is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in a turning operation. The optimal machining parameters estimated by FA, which lead to a minimum surface roughness, are validated using an ANOVA test.
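The core firefly move (dimmer fireflies drifting toward brighter ones, with attraction decaying over distance) can be sketched as follows. This is a minimal FA without the PSO hybridization, minimizing a simple sphere function; all coefficients and the objective are illustrative defaults, not the study's machining model.

```python
import math
import random

random.seed(1)

def firefly_minimize(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly algorithm: each firefly moves toward brighter ones."""
    X = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        bright = [f(x) for x in X]          # lower cost = brighter firefly
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:   # j is brighter, so i is attracted
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # distance-decayed pull
                    X[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
            bright[i] = f(X[i])
        alpha *= 0.97                       # anneal the random walk term
    return min(X, key=f)

# Illustrative objective: sphere function, minimum at the origin
best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

A hybrid with PSO, as in the paper, would typically add a velocity/memory term so particles also exploit their personal and global best positions.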
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Arbitrary-quantum-state preparation of a harmonic oscillator via optimal control
NASA Astrophysics Data System (ADS)
Rojan, Katharina; Reich, Daniel M.; Dotsenko, Igor; Raimond, Jean-Michel; Koch, Christiane P.; Morigi, Giovanna
2014-08-01
The efficient initialization of a quantum system is a prerequisite for quantum technological applications. Here we show that several classes of quantum states of a harmonic oscillator can be efficiently prepared by means of a Jaynes-Cummings interaction with a single two-level system. This is achieved by suitably tailoring external fields which drive the dipole and/or the oscillator. The time-dependent dynamics that leads to the target state is identified by means of optimal control theory (OCT) based on Krotov's method. Infidelities below 10⁻⁴ can be reached for the parameters of the experiment of Raimond, Haroche, Brune and co-workers, where the oscillator is a mode of a high-Q microwave cavity and the dipole is a Rydberg transition of an atom. For this specific situation we analyze the limitations on the fidelity due to parameter fluctuations and identify robust dynamics based on pulses found using ensemble OCT. Our analysis can be extended to quantum-state preparation of continuous-variable systems in other platforms, such as trapped ions and circuit QED.
Empirical scoring functions for advanced protein-ligand docking with PLANTS.
Korb, Oliver; Stützle, Thomas; Exner, Thomas E
2009-01-01
In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean list_nc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.
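The "reproduced with RMSD < 2 Å" success criterion used above is a standard root-mean-square deviation over matched atom coordinates; it can be computed directly. The coordinates below are fabricated toy atoms, not docking output.

```python
import math

def rmsd(pose, reference):
    """Root-mean-square deviation between matched 3D atom coordinates (Å)."""
    assert len(pose) == len(reference), "atom lists must be matched 1:1"
    sq = sum((px - rx) ** 2 + (py - ry) ** 2 + (pz - rz) ** 2
             for (px, py, pz), (rx, ry, rz) in zip(pose, reference))
    return math.sqrt(sq / len(pose))

# Fabricated example: a predicted pose close to the "crystal" reference
ref = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]
pose = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (1.6, 1.5, 0.1)]
success = rmsd(pose, ref) < 2.0  # the pose-prediction criterion in the abstract
```

Note that benchmarking pipelines usually also account for ligand symmetry when matching atoms, which this plain sketch omits.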
Zhao, Yimeng; Sun, Liangliang; Zhu, Guijie; Dovichi, Norman J
2016-10-07
We used reversed-phase liquid chromatography to separate the yeast proteome into 23 fractions. These fractions were then analyzed using capillary zone electrophoresis (CZE) coupled to a Q-Exactive HF mass spectrometer using an electrokinetically pumped sheath flow interface. The parameters of the mass spectrometer were first optimized for top-down proteomics using a mixture of seven model proteins; we observed that intact protein mode with a trapping pressure of 0.2 and normalized collision energy of 20% produced the highest intact protein signals and most protein identifications. Then, we applied the optimized parameters for analysis of the fractionated yeast proteome. From this, 580 proteoforms and 180 protein groups were identified via database searching of the MS/MS spectra. This number of proteoform identifications is two times larger than that of previous CZE-MS/MS studies. An additional 3,243 protein species were detected based on the parent ion spectra. Post-translational modifications including N-terminal acetylation, signal peptide removal, and oxidation were identified.
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
A simple approach for the modeling of an ODS steel mechanical behavior in pilgering conditions
NASA Astrophysics Data System (ADS)
Vanegas-Márquez, E.; Mocellin, K.; Toualbi, L.; de Carlan, Y.; Logé, R. E.
2012-01-01
The optimization of the forming of ODS tubes is linked to the choice of an appropriate constitutive model for modeling the metal forming process. In the framework of a unified plastic constitutive theory, the strain-controlled cyclic characteristics of a ferritic ODS steel were analyzed and modeled with two different tests. The first test is a classical tension-compression test, and leads to cyclic softening at low to intermediate strain amplitudes. The second test consists of alternating uniaxial compressions along two perpendicular axes, and is selected based on its similarities with the loading path induced by the Fe-14Cr-1W-Ti ODS cladding tube pilgering process. This second test exhibits cyclic hardening at all tested strain amplitudes. Since variable strain amplitudes prevail in pilgering conditions, the parameters of the considered constitutive law were identified based on a loading sequence including strain amplitude changes. A proposed semi-automated inverse analysis methodology is shown to efficiently provide optimal sets of parameters for the considered loading sequences. Compared to classical approaches, the model involves a reduced number of parameters, while keeping a good ability to capture stress changes induced by strain amplitude changes. Furthermore, the methodology requires only one test, which is an advantage when the amount of available material is limited. As two distinct sets of parameters were identified for the two considered tests, it is recommended to consider the loading path when modeling cold forming of the ODS steel.
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed
2018-05-01
Efficient operation of dam and reservoir systems can both provide a defense against natural hazards and identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling of different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and for prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of new innovative AI-based methods for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish a realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is proposed.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local-search optimization methods that find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature: the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. 
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
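The maximin fitness function used above to pick the leading particle can be sketched directly. The misfit scores below are invented for illustration; the convention (lower objective value is better, maximin value below zero marks a non-dominated particle) follows the standard maximin formulation.

```python
def maximin_fitness(scores):
    """Maximin fitness for multi-objective leader selection.

    scores[i][k] is objective k (lower is better) of particle i. A particle's
    maximin value is max over rivals j of min over objectives k of
    (f_i[k] - f_j[k]); negative values indicate non-dominated particles, so
    the particle with the smallest maximin value is the natural swarm leader.
    """
    n = len(scores)
    fit = []
    for i in range(n):
        worst = max(
            min(scores[i][k] - scores[j][k] for k in range(len(scores[i])))
            for j in range(n) if j != i
        )
        fit.append(worst)
    return fit

# Three particles, two data-misfit objectives (invented values)
scores = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.0]]
fit = maximin_fitness(scores)
leader = fit.index(min(fit))  # cheapest robust choice of swarm leader
```

Particle 2 is dominated by particle 1 (worse in both objectives), so its maximin value is positive, while the two Pareto-optimal particles score negative values; no ranking or niching machinery is needed.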
Parameter meta-optimization of metaheuristics for solving a specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic has been chosen to perform the meta-optimization; thus, the approach proposed in this work can be called a “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.
1996-01-01
The objective of this work was the development of efficient, user-friendly computer codes for optimizing fabrication-induced residual stresses in metal matrix composites through the use of homogeneous and heterogeneous interfacial layer architectures and processing parameter variation. To satisfy this objective, three major computer codes have been developed and delivered to the NASA-Lewis Research Center, namely MCCM, OPTCOMP, and OPTCOMP2. MCCM is a general research-oriented code for investigating the effects of microstructural details, such as layered morphology of SCS-6 SiC fibers and multiple homogeneous interfacial layers, on the inelastic response of unidirectional metal matrix composites under axisymmetric thermomechanical loading. OPTCOMP and OPTCOMP2 combine the major analysis module resident in MCCM with a commercially-available optimization algorithm and are driven by user-friendly interfaces which facilitate input data construction and program execution. OPTCOMP enables the user to identify those dimensions, geometric arrangements and thermoelastoplastic properties of homogeneous interfacial layers that minimize thermal residual stresses for the specified set of constraints. OPTCOMP2 provides additional flexibility in the residual stress optimization through variation of the processing parameters (time, temperature, external pressure and axial load) as well as the microstructure of the interfacial region which is treated as a heterogeneous two-phase composite. Overviews of the capabilities of these codes are provided together with a summary of results that addresses the effects of various microstructural details of the fiber, interfacial layers and matrix region on the optimization of fabrication-induced residual stresses in metal matrix composites.
Identifying the optimal segmentors for mass classification in mammograms
NASA Astrophysics Data System (ADS)
Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.
2015-03-01
In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, in which we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest, ROI) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. After shape features are computed from the segmented contours, the final classification model is built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble of weak segmentors. For our purpose, optimal segmentors are those which contribute the most to the overall classification rather than those that produce high-precision segmentation. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time-invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method allows all parameters of the lower-order model to be identified and, by definition, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.
Properties of nucleon resonances by means of a genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Ramirez, C.; Moya de Guerra, E.; Instituto de Estructura de la Materia, CSIC, Serrano 123, E-28006 Madrid
2008-06-15
We present an optimization scheme that employs a genetic algorithm (GA) to determine the properties of low-lying nucleon excitations within a realistic photo-pion production model based upon an effective Lagrangian. We show that with this modern optimization technique it is possible to reliably assess the parameters of the resonances and the associated error bars, as well as to identify weaknesses in the models. To illustrate the problems the optimization process may encounter, we provide results obtained for the nucleon resonances Δ(1230) and Δ(1700). The former can be easily isolated and thus has been studied in depth, while the latter is not as well known experimentally.
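A minimal real-coded genetic algorithm in the spirit described above can be sketched as follows. The one-parameter "resonance fit" objective is a hypothetical chi-square stand-in, not the paper's photo-pion production model; the operator choices (tournament selection, blend crossover, Gaussian mutation, elitism) are common textbook defaults.

```python
import random

random.seed(7)

def ga_minimize(f, bounds, pop_size=30, gens=60, mut=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = random.random()
            child = w * p1 + (1 - w) * p2            # blend crossover
            if random.random() < mut:                # Gaussian mutation
                child += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi)) # clip to bounds
        children[0] = min(pop, key=f)                # elitism: keep the best
        pop = children
    return min(pop, key=f)

# Hypothetical one-parameter chi-square minimized at mass = 1.23 (toy units)
best_mass = ga_minimize(lambda m: (m - 1.23) ** 2, bounds=(0.5, 2.0))
```

In a real fit the chromosome would hold all resonance parameters at once, and the GA's population spread around the optimum is one ingredient in assessing the error bars the abstract mentions.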
Optimization of Milling Parameters Employing Desirability Functions
NASA Astrophysics Data System (ADS)
Ribeiro, J. L. S.; Rubio, J. C. Campos; Abrão, A. M.
2011-01-01
The principal aim of this paper is to investigate the influence of tool material (one cermet and two coated carbide grades), cutting speed and feed rate on the machinability of hardened AISI H13 hot work steel, in order to identify the cutting conditions which lead to optimal performance. A multiple response optimization procedure based on tool life, surface roughness, milling forces and the machining time (required to produce a sample cavity) was employed. The results indicated that the TiCN-TiN coated carbide and cermet presented similar results concerning the global optimum values for cutting speed and feed rate per tooth, outperforming the TiN-TiCN-Al2O3 coated carbide tool.
Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae
2017-10-01
To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups, and optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed with 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.
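The threshold rule behind the reported sensitivity/specificity figures can be sketched directly: classify a subject as dysphagic when a pressure parameter falls below (or above) a cutoff, then count true/false positives and negatives. The pressure values and labels below are fabricated and do not reproduce the study's data.

```python
def sens_spec(values, labels, threshold, positive_below=True):
    """Sensitivity/specificity of a threshold rule.

    labels: 1 = dysphagia, 0 = healthy. positive_below=True mirrors parameters
    such as VP maximal pressure that are *lower* in the dysphagia group.
    """
    pred = [(v < threshold) if positive_below else (v >= threshold)
            for v in values]
    tp = sum(p and l == 1 for p, l in zip(pred, labels))
    fn = sum((not p) and l == 1 for p, l in zip(pred, labels))
    tn = sum((not p) and l == 0 for p, l in zip(pred, labels))
    fp = sum(p and l == 0 for p, l in zip(pred, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Fabricated VP maximal pressures (mmHg): patients tend lower, controls higher
values = [120, 130, 150, 100, 160, 170, 155, 140]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sens_spec(values, labels, threshold=144.0)
```

A ROC analysis, as in the study, simply repeats this computation over a sweep of thresholds and picks the cutoff that best trades off the two rates.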
A theoretical investigation of chirp insonification of ultrasound contrast agents.
Barlow, Euan; Mulholland, Anthony J; Gachagan, Anthony; Nordon, Alison
2011-08-01
A theoretical investigation of second harmonic imaging of an Ultrasound Contrast Agent (UCA) under chirp insonification is considered. By solving the UCA's dynamical equation analytically, the effects that the chirp signal parameters and the UCA shell parameters have on the amplitude of the second harmonic frequency are examined. This allows optimal parameter values to be identified which maximise the UCA's second harmonic response. A relationship is found for the chirp parameters which ensures that a signal can be designed to resonate a UCA for a given set of shell parameters. It is also shown that the shell thickness, shell viscosity and shell elasticity parameter should be as small as realistically possible in order to maximise the second harmonic amplitude. Keywords: Keller-Herring equation, second harmonic, chirp, ultrasound contrast agent. Copyright © 2011 Elsevier B.V. All rights reserved.
Optimal Designs for the Rasch Model
ERIC Educational Resources Information Center
Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer
2012-01-01
In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
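The local optimality property invoked here follows from the Rasch item information function, which peaks when item difficulty matches ability; this hedged sketch uses standard Rasch algebra, not material from this record:

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))  # probability of a correct response
    return p * (1.0 - p)

# Information is maximal (0.25) when difficulty equals ability, which is why
# design points with coinciding ability and difficulty are locally D-optimal.
peak = rasch_information(theta=0.7, b=0.7)
off  = rasch_information(theta=0.7, b=2.0)
```

The circularity the abstract points to is visible here: choosing b = theta presupposes the very theta one wants to estimate, which motivates the robust designs the paper develops.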
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
NASA Astrophysics Data System (ADS)
Bencherif, H.; Djeffal, F.; Ferhati, H.
2016-09-01
This paper presents a hybrid approach based on an analytical and metaheuristic investigation to study the impact of the interdigitated electrodes engineering on both speed and optical performance of an Interdigitated Metal-Semiconductor-Metal Ultraviolet Photodetector (IMSM-UV-PD). In this context, analytical models regarding the speed and optical performance have been developed and validated by experimental results, where a good agreement has been recorded. Moreover, the developed analytical models have been used as objective functions to determine the optimized design parameters, including the interdigit configuration effect, via a Multi-Objective Genetic Algorithm (MOGA). The ultimate goal of the proposed hybrid approach is to identify the optimal design parameters associated with the maximum of electrical and optical device performance. The optimized IMSM-PD not only reveals superior performance in terms of photocurrent and response time, but also illustrates higher optical reliability against the optical losses due to the active area shadowing effects. The advantages offered by the proposed design methodology suggest the possibility to overcome the most challenging problem with the communication speed and power requirements of the UV optical interconnect: high derived current and commutation speed in the UV receiver.
Study on loading path optimization of internal high pressure forming process
NASA Astrophysics Data System (ADS)
Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng
2017-09-01
In the internal high pressure forming process, there is no closed-form formula relating the process parameters to the forming results. This article uses numerical simulation to obtain several sets of input parameters and the corresponding output results, trains a BP neural network to capture their mapping relationship, and combines the evaluation parameters by a weighted-sum method into a single formula for assessing forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality-evaluation formula serving as the fitness function, and the optimization is carried out over the range of each parameter. The results show that the parameters obtained by the combined BP neural network and particle swarm optimization algorithms meet practical requirements. The method can solve the optimization of process parameters in the internal high pressure forming process.
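The two-stage scheme described in this record (a trained surrogate evaluated inside a particle swarm) can be sketched as follows; the quadratic "quality" function below is a hypothetical stand-in for the BP-network surrogate, and all constants and bounds are illustrative:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Hypothetical stand-in for the trained quality formula: penalize deviation
# from a fictitious optimal two-parameter setting.
quality = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.5) ** 2
best = pso_minimize(quality, bounds=[(0.0, 10.0), (0.0, 5.0)])
```

In the paper's setting, each call to the fitness function would be a forward pass of the trained BP network rather than a closed-form expression.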
Semi-physical parameter identification for an iron-loss formula allowing loss-separation
NASA Astrophysics Data System (ADS)
Steentjes, S.; Leßmann, M.; Hameyer, K.
2013-05-01
This paper presents a semi-physical parameter identification for a recently proposed enhanced iron-loss formula, the IEM-Formula. Measurements are performed on a standardized Epstein frame by the conventional field-metric method under sinusoidal magnetic flux densities up to high magnitudes and frequencies. Quasi-static losses are identified on the one hand by point-by-point dc-measurements using a flux-meter and on the other hand by extrapolating higher frequency measurements to dc magnetization using the statistical loss-separation theory (Jacobs et al., "Magnetic material optimization for hybrid vehicle PMSM drives," in Inductica Conference, CD-Rom, Chicago/USA, 2009). Utilizing this material information, possibilities to identify the parameters of the IEM-Formula are analyzed. Along with this, the importance of excess losses in present-day non-grain oriented Fe-Si laminations is investigated. In conclusion, the calculated losses are compared to the measured losses.
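The statistical loss-separation step cited here can be illustrated with the classical Bertotti decomposition into hysteresis, eddy-current and excess terms, identified by linear least squares; the coefficients and measurement grid below are synthetic, and the IEM-Formula itself contains additional terms not shown:

```python
import numpy as np

def fit_loss_separation(f, B, P):
    """Least-squares fit of the classical loss-separation form
    P = k_h*B^2*f + k_e*B^2*f^2 + k_x*B^1.5*f^1.5 (linear in the k's)."""
    A = np.column_stack([B**2 * f, B**2 * f**2, B**1.5 * f**1.5])
    coeffs, *_ = np.linalg.lstsq(A, P, rcond=None)
    return coeffs  # k_h (hysteresis), k_e (eddy), k_x (excess)

# Synthetic Epstein-frame-like grid with known coefficients.
f = np.array([50.0, 100.0, 200.0, 400.0, 50.0, 100.0, 200.0, 400.0])
B = np.array([1.0, 1.0, 1.0, 1.0, 1.5, 1.5, 1.5, 1.5])
P = 0.02 * B**2 * f + 5e-5 * B**2 * f**2 + 1e-3 * B**1.5 * f**1.5
k_h, k_e, k_x = fit_loss_separation(f, B, P)
```

Extrapolating the fitted expression to f -> 0 recovers the quasi-static (hysteresis) contribution, which is the role loss separation plays in the identification described above.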
Effect of microwave argon plasma on the glycosidic and hydrogen bonding system of cotton cellulose.
Prabhu, S; Vaideki, K; Anitha, S
2017-01-20
Cotton fabric was processed with microwave (Ar) plasma to alter its hydrophilicity. The process parameters, namely microwave power, process gas pressure and processing time, were optimized using the Box-Behnken method available in the Design Expert software. It was observed that certain combinations of process parameters improved the existing hydrophilicity while other combinations decreased it. ATR-FTIR spectral analysis was used to identify the strain induced in the inter-chain, intra-chain and inter-sheet hydrogen bonds and the glycosidic covalent bond due to plasma treatment. X-ray diffraction (XRD) studies were used to analyze the effect of plasma on the unit cell parameters and degree of crystallinity. Fabric surface etching was identified using FESEM analysis. Thus, it can be concluded that the increase/decrease in the hydrophilicity of the plasma-treated fabric was due to these structural and physical changes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Applying machine learning to identify autistic adults using imitation: An exploratory study.
Li, Baihua; Sharma, Arjun; Meng, James; Purushwalkam, Senthil; Gowen, Emma
2017-01-01
Autism spectrum condition (ASC) is primarily diagnosed by behavioural symptoms including social, sensory and motor aspects. Although stereotyped, repetitive motor movements are considered during diagnosis, quantitative measures that identify kinematic characteristics in the movement patterns of autistic individuals are poorly studied, preventing advances in understanding the aetiology of motor impairment, or whether a wider range of motor characteristics could be used for diagnosis. The aim of this study was to investigate whether data-driven machine learning based methods could be used to address some fundamental problems with regard to identifying discriminative test conditions and kinematic parameters to classify between ASC and neurotypical controls. Data was based on a previous task where 16 ASC participants and 14 age, IQ matched controls observed then imitated a series of hand movements. 40 kinematic parameters extracted from eight imitation conditions were analysed using machine learning based methods. Two optimal imitation conditions and nine most significant kinematic parameters were identified and compared with some standard attribute evaluators. To our knowledge, this is the first attempt to apply machine learning to kinematic movement parameters measured during imitation of hand movements to investigate the identification of ASC. Although based on a small sample, the work demonstrates the feasibility of applying machine learning methods to analyse high-dimensional data and suggest the potential of machine learning for identifying kinematic biomarkers that could contribute to the diagnostic classification of autism.
Exploring silver as a contrast agent for contrast-enhanced dual-energy X-ray breast imaging
Tsourkas, A; Maidment, A D A
2014-01-01
Objective: Through prior monoenergetic modelling, we have identified silver as a potential alternative to iodine in dual-energy (DE) X-ray breast imaging. The purpose of this study was to compare the performance of silver and iodine contrast agents in a commercially available DE imaging system through a quantitative analysis of signal difference-to-noise ratio (SDNR). Methods: A polyenergetic simulation algorithm was developed to model the signal intensity and noise. The model identified the influence of various technique parameters on SDNR. The model was also used to identify the optimal imaging techniques for silver and iodine, so that the two contrast materials could be objectively compared. Results: The major influences on the SDNR were the low-energy dose fraction and breast thickness. An increase in the value of either of these parameters resulted in a decrease in SDNR. The SDNR for silver was on average 43% higher than that for iodine when imaged at their respective optimal conditions, and 40% higher when both were imaged at the optimal conditions for iodine. Conclusion: A silver contrast agent should provide benefit over iodine, even when translated to the clinic without modification of imaging system or protocol. If the system were slightly modified to reflect the lower k-edge of silver, the difference in SDNR between the two materials would be increased. Advances in knowledge: These data are the first to demonstrate the suitability of silver as a contrast material in a clinical contrast-enhanced DE image acquisition system. PMID:24998157
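The SDNR figure of merit used above can be sketched for a weighted dual-energy subtraction image; the signal, noise and weighting values below are illustrative, not from the simulation in this record, and independent noise between the two acquisitions is assumed:

```python
import math

def dual_energy_sdnr(s_le, s_he, b_le, b_he, w, sigma_le, sigma_he):
    """Signal difference-to-noise ratio of a weighted DE subtraction image.

    s_*: mean signal in the contrast-enhanced region (low/high energy),
    b_*: mean background signal, w: subtraction weighting factor,
    sigma_*: per-image noise, assumed independent between energies.
    """
    signal_diff = (s_he - w * s_le) - (b_he - w * b_le)
    noise = math.sqrt(sigma_he**2 + (w * sigma_le)**2)
    return abs(signal_diff) / noise

sdnr = dual_energy_sdnr(s_le=100.0, s_he=80.0, b_le=95.0, b_he=60.0,
                        w=0.5, sigma_le=2.0, sigma_he=2.0)
```

In a polyenergetic model such as the paper's, the mean signals and noise would themselves be integrals over the x-ray spectrum, but the comparison between silver and iodine reduces to the same ratio.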
Chaos minimization in DC-DC boost converter using circuit parameter optimization
NASA Astrophysics Data System (ADS)
Sudhakar, N.; Natarajan, Rajasekar; Gourav, Kumar; Padmavathi, P.
2017-11-01
DC-DC converters are prone to several types of nonlinear phenomena, including bifurcation, quasi-periodicity, intermittency and chaos. These undesirable effects must be controlled for periodic operation of the converter to ensure stability. In this paper an effective solution for the control of chaos in a solar-fed DC-DC boost converter is proposed. Control of chaos is achieved using optimal circuit parameters obtained through the Bacterial Foraging Optimization Algorithm. The optimization renders suitable parameters in minimum computational time. The obtained results are compared with the operation of a traditional boost converter, and the BFA-optimized parameters ensure that the converter operates within the controllable region. A bifurcation analysis with optimized and unoptimized parameters is also presented.
Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y
2016-10-01
This paper addresses the methodological conditions, particularly experimental design and statistical inference, ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as a nonequilibrium parameterization embedding the K d approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because K d,1 and k - were diagnosed collinear. For pairs of CSTR experiments, Bayesian inference allowed selection of the correct models of sorption and error among sorption alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting in varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to the maximum likelihood method, but they avoided convergence problems, the marginal likelihood allowed comparison of all models, and credible intervals directly gave the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.
León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa
2018-01-01
This work was aimed at determining the feasibility of artificial neural networks (ANN) by implementing backpropagation algorithms with default settings to generate better predictive models than multiple linear regression (MLR) analysis. The study was hypothesized on timolol-loaded liposomes. As training data for the ANN, causal factors were used and fed into the computer program. The number of training cycles was identified in order to optimize the performance of the ANN. The optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. Minimum validation error was achieved at 12 hidden neurons in a single layer. MLR has great prediction ability, with errors between predicted and real values lower than 1% in some of the parameters evaluated. Thus, the performance of the ANN model was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance among measured and theoretical parameters, by estimating the prediction errors. Results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of the combination of ANN and design of experiments, compared to conventional MLR modeling techniques.
Jin, Cheng; Stein, Gregory J; Hong, Kyung-Han; Lin, C D
2015-07-24
We investigate the efficient generation of low-divergence high-order harmonics driven by waveform-optimized laser pulses in a gas-filled hollow waveguide. The drive waveform is obtained by synthesizing two-color laser pulses, optimized such that highest harmonic yields are emitted from each atom. Optimization of the gas pressure and waveguide configuration has enabled us to produce bright and spatially coherent harmonics extending from the extreme ultraviolet to soft x rays. Our study on the interplay among waveguide mode, atomic dispersion, and plasma effect uncovers how dynamic phase matching is accomplished and how an optimized waveform is maintained when optimal waveguide parameters (radius and length) and gas pressure are identified. Our analysis should help laboratory development in the generation of high-flux bright coherent soft x rays as tabletop light sources for applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L
As wind turbine blade diameters and tower heights increase to capture more energy in the wind, higher structural loads result in more structural support material, increasing the cost of scaling. Weight reductions in the generator transfer to overall cost savings for the system. Additive manufacturing facilitates a design-for-functionality approach, thereby removing traditional manufacturing constraints and labor costs. The most feasible additive manufacturing technology identified for large, direct-drive generators in this study is powder-binder jetting of a sand cast mold. A parametric finite element analysis optimization study is performed, optimizing for mass and deformation. Also, topology optimization is employed for each parameter-optimized design. The optimized U-beam spoked web design results in a 24 percent reduction in structural mass of the rotor and a 60 percent reduction in radial deflection.
All-Optical Implementation of the Ant Colony Optimization Algorithm
Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare
2016-01-01
We report all-optical implementation of the optimization algorithm for the famous "ant colony" problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants by marking the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
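The pheromone dynamics described in this record can be sketched in a conventional software ACO for shortest paths; the graph, deposit rule, and parameters below are illustrative, and the paper's optical implementation of course replaces this loop with physical feedback in nonlinear waveguides:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=30, iters=40,
                      evaporation=0.5, seed=1):
    """Minimal ant colony optimization for a shortest path on a weighted digraph."""
    rng = random.Random(seed)
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(iters):
        walks = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:        # dead end: discard this ant
                    path = None
                    break
                weights = [pher[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[u][v] for u, v in zip(path, path[1:]))
                walks.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        for key in pher:               # pheromone evaporation
            pher[key] *= evaporation
        for path, length in walks:     # deposit: shorter routes get more
            for edge in zip(path, path[1:]):
                pher[edge] += 1.0 / length
    return best_path, best_len

graph = {  # tiny test graph: the shortest A->D route is A-B-D, length 3
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 2},
    "C": {"D": 2},
    "D": {},
}
path, length = aco_shortest_path(graph, "A", "D")
```

The evaporation step is the software analogue of the transient (decaying) nonlinearity the paper exploits optically.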
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
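A much-simplified version of such adaptive assignment is to give each examinee the field-test item with the largest Fisher information averaged over posterior draws of their ability; the 2PL items and draws below are illustrative, and the paper's criteria additionally use the field-test items' own posterior distributions:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta_draws, items):
    """Assign the field-test item with the largest information averaged
    over posterior draws of the examinee's ability (a D-optimal-style rule
    for a single unknown parameter)."""
    def expected_info(item):
        a, b = item
        return sum(item_information(t, a, b) for t in theta_draws) / len(theta_draws)
    return max(items, key=expected_info)

theta_draws = [0.4, 0.6, 0.5, 0.7]             # posterior samples of ability
items = [(1.0, -2.0), (1.2, 0.5), (0.8, 2.5)]  # hypothetical (a, b) field-test items
best = pick_item(theta_draws, items)
```

The item matched in difficulty to the examinee's likely ability wins, which is the intuition behind calibrating items on the examinees most informative about them.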
Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz
NASA Astrophysics Data System (ADS)
Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao
2018-05-01
In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives better fitting accuracy than the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.
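A minimal real-coded genetic algorithm for this kind of parameter extraction might look like the following sketch; the two-parameter linear model is a hypothetical stand-in for the nine-parameter inductor equivalent circuit, and all settings are illustrative:

```python
import random

def ga_fit(model, xs, ys, bounds, pop_size=40, gens=60, seed=3):
    """Minimal real-coded genetic algorithm minimizing squared fitting error."""
    rng = random.Random(seed)

    def error(params):
        return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]   # arithmetic crossover
            for d, (lo, hi) in enumerate(bounds):           # gaussian mutation
                if rng.random() < 0.3:
                    child[d] += rng.gauss(0.0, 0.05 * (hi - lo))
                    child[d] = min(max(child[d], lo), hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=error)

# Hypothetical two-parameter stand-in for a lumped-element response:
# a constant term plus a frequency-proportional term.
model = lambda f, p: p[0] + p[1] * f
freqs = [1.0, 2.0, 5.0, 10.0, 20.0]
data = [model(f, [0.8, 0.35]) for f in freqs]   # "measured" with p = (0.8, 0.35)
best = ga_fit(model, freqs, data, bounds=[(0.0, 5.0), (0.0, 1.0)])
```

In the paper's setting the error function would compare measured and modeled S- or Y-parameters across the full 110 GHz band rather than this toy response.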
Modeling the biomechanical and injury response of human liver parenchyma under tensile loading.
Untaroiu, Costin D; Lu, Yuan-Chiao; Siripurapu, Sundeep K; Kemper, Andrew R
2015-01-01
The rapid advancement in computational power has made human finite element (FE) models one of the most efficient tools for assessing the risk of abdominal injuries in a crash event. In this study, specimen-specific FE models were employed to quantify material and failure properties of human liver parenchyma using a FE optimization approach. Uniaxial tensile tests were performed on 34 parenchyma coupon specimens prepared from two fresh human livers. Each specimen was tested to failure at one of four loading rates (0.01s(-1), 0.1s(-1), 1s(-1), and 10s(-1)) to investigate the effects of rate dependency on the biomechanical and failure response of liver parenchyma. Each test was simulated by prescribing the end displacements of specimen-specific FE models based on the corresponding test data. The parameters of a first-order Ogden material model were identified for each specimen by a FE optimization approach while simulating the pre-tear loading region. The mean material model parameters were then determined for each loading rate from the characteristic averages of the stress-strain curves, and a stochastic optimization approach was utilized to determine the standard deviations of the material model parameters. A hyperelastic material model using a tabulated formulation for rate effects showed good predictions in terms of tensile material properties of human liver parenchyma. Furthermore, the tissue tearing was numerically simulated using a cohesive zone modeling (CZM) approach. A layer of cohesive elements was added at the failure location, and the CZM parameters were identified by fitting the post-tear force-time history recorded in each test. The results show that the proposed approach is able to capture both the biomechanical and failure response, and accurately model the overall force-deflection response of liver parenchyma over a large range of tensile loadings rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pervez, Hifsa; Mozumder, Mohammad S.; Mourad, Abdel-Hamid I.
2016-01-01
The current study presents an investigation on the optimization of injection molding parameters of HDPE/TiO2 nanocomposites using grey relational analysis with the Taguchi method. Four control factors, including filler concentration (i.e., TiO2), barrel temperature, residence time and holding time, were chosen at three different levels of each. Mechanical properties, such as yield strength, Young’s modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L9 orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO2, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) has also been applied to identify the most significant factor, and the percentage of TiO2 nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO2 nanocomposites fabricated through the injection molding process. PMID:28773830
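The grey relational steps referred to above (normalization, relational coefficients, grade averaging) can be sketched as follows, assuming larger-the-better responses and the customary distinguishing coefficient ζ = 0.5; the three runs below are illustrative, not the paper's L9 data:

```python
def grey_relational_grades(runs, zeta=0.5):
    """Grey relational grade for each experimental run.

    runs: list of response vectors, one per run (larger-the-better responses
    assumed, e.g. yield strength, Young's modulus, elongation).
    """
    n_resp = len(runs[0])
    # Normalize each response column to [0, 1], larger-the-better.
    normed = []
    for col in zip(*runs):
        lo, hi = min(col), max(col)
        normed.append([(v - lo) / (hi - lo) for v in col])
    rows = list(zip(*normed))
    # Deviation from the ideal (normalized value 1.0) per response.
    deltas = [[1.0 - v for v in row] for row in rows]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(coeffs) / n_resp)   # equal response weights assumed
    return grades

# Illustrative responses for three runs: [strength MPa, modulus GPa, elongation %]
runs = [[20.0, 1.1, 8.0], [25.0, 1.4, 6.0], [22.0, 1.2, 9.0]]
grades = grey_relational_grades(runs)
best_run = grades.index(max(grades))
```

Averaging the grades by factor level, as in the Taguchi analysis, then identifies the optimal setting of each control factor.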
CEC-atmospheric pressure ionization MS of pesticides using a surfactant-bound monolithic column.
Gu, Congying; Shamsi, Shahab A
2010-04-01
A surfactant bound poly (11-acrylaminoundecanoic acid-ethylene dimethacrylate) monolithic column was simply prepared by in situ co-polymerization of 11-acrylaminoundecanoic acid and ethylene dimethacrylate with 1-propanol, 1,4-butanediol and water as porogens in a 100 microm id fused-silica capillary in one step. This column was used in a CEC-atmospheric pressure photoionization (APPI)-MS system for separation and detection of N-methylcarbamate pesticides. Numerous parameters were optimized for CEC-APPI-MS. After evaluation of the mobile phase composition, sheath liquid composition and the monolithic capillary outlet position, a fractional factorial design was selected as a screening procedure to identify factors of ionization source parameters, such as sheath liquid flow rate, drying gas flow rate, drying gas temperature, nebulizing gas pressure, vaporizer temperature and capillary voltage, which significantly influence APPI-MS sensitivity. A face-centered central composite design was further utilized to optimize the most significant parameters and predict the best sensitivity. Under optimized conditions, S/Ns around 78 were achieved for an injection of 100 ng/mL of each pesticide. Finally, this CEC-APPI-MS method was successfully applied to the analysis of nine N-methylcarbamates in a spiked apple juice sample after solid phase extraction, with recoveries in the range of 65-109%.
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When optimizing models for parameter estimation or for the design of new properties, mainly numerical methods are used. That causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or in the case of large dynamic models. This task is complex due to the variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048
Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.
NASA Astrophysics Data System (ADS)
Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.
2016-12-01
This study focuses on parameter optimization of microphysics in the Unified Model (UM) using the Micro-genetic algorithm (Micro-GA). Optimization of the UM microphysics is needed because microphysics in numerical weather prediction (NWP) models is important to quantitative precipitation forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five target parameters are chosen: x1 and x2, related to the raindrop size distribution; the cloud-rain correlation coefficient; the surface droplet number; and the droplet taper height. The fitness function is based on two skill scores, BIAS and the Critical Success Index (CSI). An interface between the UM and the Micro-GA is developed and applied to three precipitation cases in Korea: (i) heavy rainfall in the southern area caused by typhoon NAKRI, (ii) heavy rainfall in the Youngdong area, and (iii) heavy rainfall in the Seoul metropolitan area. Compared with the control run (using the UM default values, CNTL), the optimized configuration improves the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of the precipitation forecasts at various thresholds for CNTL, the optimized result, and experiments varying each of the five optimized parameters individually. In general, the improvement is maximized when all five optimized parameters are used simultaneously. This study therefore demonstrates the ability to improve Korean precipitation forecasts by optimizing the microphysics in the UM.
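The micro-genetic search loop described above can be sketched in a few lines. The fitness below is a toy stand-in for the BIAS/CSI skill score, and the population size, bounds, and mutation step are illustrative assumptions (a classical micro-GA relies on restarts rather than mutation; a small mutation move is added here to keep the sketch self-contained):

```python
import random

def micro_ga(fitness, bounds, pop_size=5, generations=300, seed=1):
    """Minimal elitist micro-GA: tiny population, uniform crossover with the
    elite, and a small mutation move to maintain diversity."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # elitist: best individual first
        elite = pop[0]
        children = [elite[:]]                 # elite always survives
        while len(children) < pop_size:
            mate = rng.choice(pop)
            child = [x if rng.random() < 0.5 else y
                     for x, y in zip(elite, mate)]
            if rng.random() < 0.5:            # mutate one coordinate slightly
                j = rng.randrange(len(child))
                lo, hi = bounds[j]
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.2)))
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# toy "skill score": peaks at the (hypothetical) true parameter values
best = micro_ga(lambda p: -((p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2),
                bounds=[(-3, 3), (-3, 3)])
```

In practice the fitness evaluation would launch a UM forecast and score it against observations, which is why the Micro-GA's very small population is attractive.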
NASA Astrophysics Data System (ADS)
Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.
2018-03-01
Background/Objectives: The paper discusses optimal cutting parameters under different coolant conditions (1.0 mm nozzle orifice, wet, and dry) to optimize surface roughness, temperature, and tool wear in the machining process. The selected cutting parameters for this study were cutting speed, feed rate, depth of cut, and coolant condition. Methods/Statistical Analysis: Experiments were conducted and analyzed based on Design of Experiments (DOE) with the Response Surface Method (RSM). This research on aggressive machining of aluminum alloy A319 for automotive applications is an effort to understand a machining process widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature, and tool wear increase during machining, and that the 1.0 mm nozzle orifice coolant technique can minimize built-up edge on A319. Surface roughness, productivity, and the optimization of cutting speed in the technical and commercial aspects of manufacturing A319 automotive components are discussed for further work. Applications/Improvements: The results are also beneficial for minimizing costs and improving the productivity of manufacturing firms. Based on the mathematical models generated by CCD-based RSM, experiments were performed showing that the coolant technique with the selected nozzle size can reduce tool wear, surface roughness, and temperature. The results were analyzed and the cutting parameters optimized, showing that the effectiveness and efficiency of the system can be identified, which helps solve potential problems.
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods (scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient, and the sensitivity heat map method) and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
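Of the methods surveyed, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC) is compact enough to sketch. The three-parameter model below is a toy monotonic stand-in, not the cholera or schistosomiasis transmission model from the paper:

```python
import numpy as np

def lhs(n, k, rng):
    """Latin hypercube sample of n points in k dimensions on [0, 1)."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        rng.shuffle(u[:, j])   # shuffle each column's strata independently
    return u

def prcc(X, y):
    """Partial rank correlation of each input column with the output:
    rank-transform, regress out the other columns, correlate residuals."""
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    r = np.argsort(np.argsort(y)).astype(float)
    out = []
    for j in range(X.shape[1]):
        others = np.c_[np.delete(R, j, axis=1), np.ones(len(y))]
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = r - others @ np.linalg.lstsq(others, r, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return out

rng = np.random.default_rng(0)
X = lhs(200, 3, rng)
# toy monotonic model: strongly driven by x0, weakly by x1, not by x2
y = 5 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.random(200)
cc = prcc(X, y)
```

PRCC is only meaningful for monotonic input-output relationships, which is precisely the limitation the variance-based and heat-map methods in the paper address.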
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are taken for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain monthly optimal values of the sensitive parameters at each site. We then constructed temporal, spatial, and combined temporal-spatial heterogeneity judgment indices to quantitatively analyze the heterogeneity of the optimal values of the sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented spatiotemporal heterogeneity to differing degrees, varying with vegetation type. Sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger heterogeneity.
In addition, the temporal heterogeneity of the optimal values showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to this spatiotemporal heterogeneity, the parameters of the BIOME-BGC model can be classified so that different parameter strategies can be adopted in practical applications. These conclusions help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
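The calibration step, simulated annealing of sensitive parameters against an objective built from observations, can be sketched generically. The quadratic "misfit" and its minimum at (0.8, 0.2) below are illustrative stand-ins for the flux-data objective function:

```python
import math, random

def anneal(objective, x0, bounds, t0=1.0, cooling=0.995, steps=4000, seed=0):
    """Minimize an objective by simulated annealing with a geometric cooling
    schedule and single-coordinate Gaussian proposal moves."""
    rng = random.Random(seed)
    x, fx, t = list(x0), objective(x0), t0
    best, fbest = list(x), fx
    for _ in range(steps):
        j = rng.randrange(len(x))
        cand = list(x)
        lo, hi = bounds[j]
        cand[j] = min(hi, max(lo, cand[j] + rng.gauss(0, 0.1 * (hi - lo))))
        fc = objective(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# toy "misfit" between simulated and observed fluxes, minimum at (0.8, 0.2)
best, fbest = anneal(lambda p: (p[0] - 0.8) ** 2 + (p[1] - 0.2) ** 2,
                     x0=[0.1, 0.9], bounds=[(0, 1), (0, 1)])
```

In the paper's setting the objective would compare BIOME-BGC output against eddy-covariance flux measurements month by month, but the acceptance and cooling logic is unchanged.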
Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research
NASA Astrophysics Data System (ADS)
Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.
2018-06-01
The present paper deals with a current and complex topic: solving a technical problem regarding the modeling and subsequent optimization of technical parameters related to the natural gas extraction process. The aim of the study is to optimize gas well foaming using experimental research methods and data processing, with regular well interventions using different foaming agents. This procedure reduces the hydrostatic pressure through foam formed from the deposit water and the foaming agent, which can then be removed at the surface by the produced gas flow. The well production data were analyzed and a candidate well for the research emerged. This is an extremely complex study, carried out in the field, where it was found that, due to severe gas field depletion, well flow decreases and the wells begin loading with deposit water. Regular well foaming was therefore required to optimize the daily production flow and dispose of the water accumulated in the wellbore. To analyze the natural gas production process, a factorial experiment and other methods were used; this choice was made because the method can offer very good research results from a small number of experimental data points. Finally, through this study, the extraction process problems were identified by analyzing and optimizing the technical parameters, which led to a quality improvement of the extraction process.
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytically predicted value when assuming that the guide transport is dominated by multiple scattering events.
Real-time parameter optimization based on neural network for smart injection molding
NASA Astrophysics Data System (ADS)
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability and the performance and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) at the manufacturing process level through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology). The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, the process is used to produce precise parts in various industries such as automobiles, optics, and medical devices. Injection molding involves a mixture of discrete and continuous variables, and to optimize quality, the variables generated during the process must all be considered. Furthermore, finding the optimal parameter settings that yield optimum product quality is time-consuming, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, compared with the pre-production parameter optimization of typical studies.
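The optimization loop, train a surrogate on process records and then search candidate settings on it, can be sketched as follows. For brevity a least-squares quadratic surrogate stands in for the paper's neural network, and the two "process parameters" and the quality function are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic process records: (melt temperature, injection pressure) -> quality,
# both settings scaled to [0, 1]; true optimum placed at (0.6, 0.3)
X = rng.random((200, 2))
y = (1 - 3 * (X[:, 0] - 0.6) ** 2 - 3 * (X[:, 1] - 0.3) ** 2
     + 0.01 * rng.standard_normal(200))

# quadratic surrogate (a stand-in for the paper's neural network), fitted by
# least squares on the features [1, a, b, a^2, b^2, ab]
def feats(P):
    a, b = P[:, 0], P[:, 1]
    return np.c_[np.ones(len(P)), a, b, a * a, b * b, a * b]

w = np.linalg.lstsq(feats(X), y, rcond=None)[0]

# "real-time" search step: rate a grid of candidate settings on the surrogate
grid = np.array([[a, b] for a in np.linspace(0, 1, 21)
                        for b in np.linspace(0, 1, 21)])
best = grid[np.argmax(feats(grid) @ w)]
```

The same two-step structure (refit the surrogate as new cycle data arrive, then re-search the settings) is what makes the approach usable during production rather than only before it.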
Reproducibility of Heart Rate Variability Is Parameter and Sleep Stage Dependent.
Herzig, David; Eser, Prisca; Omlin, Ximena; Riener, Robert; Wilhelm, Matthias; Achermann, Peter
2017-01-01
Objective: Measurements of heart rate variability (HRV) during sleep have become increasingly popular, as sleep could provide an optimal state for HRV assessments. While sleep stages have been reported to affect HRV, the effect of sleep stages on the variance of HRV parameters has hardly been investigated. We aimed to assess the variance of HRV parameters during the different sleep stages. Further, we tested the accuracy of an algorithm using HRV to identify a 5-min segment within an episode of slow wave sleep (SWS, deep sleep). Methods: Polysomnographic (PSG) sleep recordings of 3 nights of 15 healthy young males were analyzed. Sleep was scored according to conventional criteria. HRV parameters of consecutive 5-min segments were analyzed within the different sleep stages. The total variance of HRV parameters was partitioned into between-subjects variance, between-nights variance, and between-segments variance and compared between the different sleep stages. Intra-class correlation coefficients of all HRV parameters were calculated for all sleep stages. To identify an SWS segment based on HRV, Pearson correlation coefficients of consecutive R-R intervals (rRR) were computed over moving 5-min windows (20-s steps). The linear trend was removed from the rRR time series, and the first segment with rRR values 0.1 units below the mean rRR for at least 10 min was identified. A 5-min segment was placed in the middle of such an identified segment, and the corresponding sleep stage was used to assess the accuracy of the algorithm. Results: Good reproducibility within and across nights was found for heart rate in all sleep stages and for high-frequency (HF) power in SWS. Reproducibility of low-frequency (LF) power and of the LF/HF ratio was poor in all sleep stages. Of all the 5-min segments selected based on HRV data, 87% were accurately located within SWS.
Conclusions: SWS, a stable state that, in contrast to waking, is unaffected by internal and external factors, is a reproducible state that allows reliable determination of heart rate and HF power, and can satisfactorily be detected based on R-R intervals without the need for full PSG. Sleep may not be an optimal condition for assessing LF power and the LF/HF power ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
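The Elementary Effect screening named in the third part can be sketched directly. The three-factor model below is a toy stand-in for the metamodel of the laser-drilling process, and the sampling settings are illustrative:

```python
import numpy as np

def elementary_effects(model, k, r=20, delta=0.1, seed=0):
    """Morris screening: for r random base points, perturb one factor at a
    time by delta and average the absolute elementary effect per factor."""
    rng = np.random.default_rng(seed)
    ee = np.zeros(k)
    for _ in range(r):
        x = rng.random(k) * (1 - delta)      # keep x + delta inside [0, 1]
        fx = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            ee[j] += abs(model(xp) - fx) / delta
    return ee / r   # mean absolute elementary effect (mu*)

# toy metamodel: x0 strong and nonlinear, x1 weak and linear, x2 inert
mu_star = elementary_effects(
    lambda x: np.sin(3 * x[0]) + 0.2 * x[1] + 0.0 * x[2], k=3)
```

Factors with small mu* can then be frozen before the more expensive Sobol variance decomposition is run on the survivors.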
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific heterotrophic growth rate µH and the concentration of heterotrophic biomass XBH.
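The core of such a numerical identifiability test, accumulating an information matrix from output sensitivities over a simulated experiment, can be sketched as follows. The two toy models, noise level, and time grid are illustrative assumptions, with the condition number of the matrix used as the identifiability indicator:

```python
import numpy as np

def fisher_information(sim, theta, t_grid, sigma=0.05, eps=1e-6):
    """Accumulate F = sum_t J_t^T J_t / sigma^2 over a simulated experiment,
    where J_t is the central-difference sensitivity of the output to theta."""
    theta = np.asarray(theta, float)
    F = np.zeros((len(theta), len(theta)))
    for t in t_grid:
        J = np.empty(len(theta))
        for i in range(len(theta)):
            tp, tm = theta.copy(), theta.copy()
            tp[i] += eps
            tm[i] -= eps
            J[i] = (sim(tp, t) - sim(tm, t)) / (2 * eps)
        F += np.outer(J, J) / sigma ** 2
    return F

t = np.linspace(0, 2, 21)
# identifiable toy model: each parameter shapes the output differently
F1 = fisher_information(lambda p, t: p[0] * np.exp(-p[1] * t), [1.0, 0.5], t)
# unidentifiable toy model: only the sum p0 + p1 affects the output
F2 = fisher_information(lambda p, t: (p[0] + p[1]) * t, [1.0, 0.5], t)
```

A near-singular (huge condition number) information matrix signals that some parameter combination leaves the measured output unchanged, which is exactly what the paper's geometric information parameters are built to expose.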
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, resides in the parameters describing physical processes in numerical models. Therefore, identifying a subset of the relatively more sensitive and important parameters among the numerous physical parameters in atmospheric and oceanic models, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach over China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in the arid and semi-arid regions of China, compared to northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors in the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
Wu, Yan; Xiao, Xin-yu; Ge, Fa-huan
2012-02-01
To study the supercritical CO2 extraction conditions of Sapindus mukorossi oil and identify its components, the SFE-CO2 extraction was optimized by response surface methodology and GC-MS was used to analyze the oil's compounds. A model equation for the extraction rate of Sapindus mukorossi oil by supercritical CO2 extraction was established, and the optimal parameters determined from the equation were: extraction pressure 30 MPa at 40 °C; separation I pressure 14 MPa at 45 °C; separation II pressure 6 MPa at 40 °C; and extraction time 60 min, giving an extraction rate of 17.58%. Twenty-two main compounds of the oil extracted by supercritical CO2 were identified by GC-MS, of which unsaturated fatty acids accounted for 86.59%. This process is reliable, safe, and simple to operate, and can be used for the extraction of Sapindus mukorossi oil.
Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling
2016-01-01
Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, in which sulfate groups are added to tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method is presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and an elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal feature subset, the proposed method achieved a mean MCC of 94.41% on the benchmark dataset and an MCC of 90.09% on the independent dataset. The experimental performance indicated that the new method could be effective in identifying important protein posttranslational modifications, and that the feature selection scheme would be powerful for protein functional residue prediction research.
Guo, Tianruo; Yang, Chih Yu; Tsai, David; Muralidharan, Madhuvanthi; Suaning, Gregg J.; Morley, John W.; Dokos, Socrates; Lovell, Nigel H.
2018-01-01
The ability for visual prostheses to preferentially activate functionally-distinct retinal ganglion cells (RGCs) is important for improving visual perception. This study investigates the use of high frequency stimulation (HFS) to elicit RGC activation, using a closed-loop algorithm to search for optimal stimulation parameters for preferential ON and OFF RGC activation, resembling natural physiological neural encoding in response to visual stimuli. We evaluated the performance of a wide range of electrical stimulation amplitudes and frequencies on RGC responses in vitro using murine retinal preparations. It was possible to preferentially excite either ON or OFF RGCs by adjusting amplitudes and frequencies in HFS. ON RGCs can be preferentially activated at relatively higher stimulation amplitudes (>150 μA) and frequencies (2–6.25 kHz) while OFF RGCs are activated by lower stimulation amplitudes (40–90 μA) across all tested frequencies (1–6.25 kHz). These stimuli also showed great promise in eliciting RGC responses that parallel natural RGC encoding: ON RGCs exhibited an increase in spiking activity during electrical stimulation while OFF RGCs exhibited decreased spiking activity, given the same stimulation amplitude. In conjunction with the in vitro studies, in silico simulations indicated that optimal HFS parameters could be rapidly identified in practice, whilst sampling spiking activity of relevant neuronal subtypes. This closed-loop approach represents a step forward in modulating stimulation parameters to achieve appropriate neural encoding in retinal prostheses, advancing control over RGC subtypes activated by electrical stimulation. PMID:29615857
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom, and have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius, and three levels each of sheet thickness, step size, tool rotational speed, feed rate and lubrication, have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter with the utmost influence on both formability and surface roughness is lubrication. For formability, lubrication is followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius in descending order of influence, whereas for surface roughness, lubrication is followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed. The predicted optimal values for the wall angle and surface roughness are 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.
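The response-table analysis used to rank the factors can be sketched on a toy orthogonal array. The runs and responses below are invented for illustration and are not the paper's L18 data:

```python
import numpy as np

# hypothetical 3-factor, 3-level orthogonal-array experiment:
# rows are runs (factor levels coded 0..2), paired with a measured response
runs = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                 [1, 0, 1], [1, 1, 2], [1, 2, 0],
                 [2, 0, 2], [2, 1, 0], [2, 2, 1]])
response = np.array([2.1, 2.4, 2.2, 3.0, 3.1, 2.9, 1.2, 1.5, 1.4])

# response table: mean response at each level of each factor; the factor
# with the largest level-to-level range is the most influential
means = np.array([[response[runs[:, f] == l].mean() for l in range(3)]
                  for f in range(runs.shape[1])])
ranges = means.max(axis=1) - means.min(axis=1)
most_influential = int(np.argmax(ranges))
best_levels = means.argmax(axis=1)   # levels maximizing the response
```

The orthogonality of the array is what allows each factor's level means to be compared fairly, since every level of every other factor appears equally often alongside it.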
Seizure Control in a Computational Model Using a Reinforcement Learning Stimulation Paradigm.
Nagaraj, Vivek; Lamperski, Andrew; Netoff, Theoden I
2017-11-01
Neuromodulation technologies such as vagus nerve stimulation and deep brain stimulation have shown some efficacy in controlling seizures in medically intractable patients. However, the inherent patient-to-patient variability of seizure disorders leads to a wide range of therapeutic efficacy. A patient-specific approach to determining stimulation parameters may lead to increased therapeutic efficacy while minimizing stimulation energy and side effects. This paper presents a reinforcement learning algorithm that optimizes stimulation frequency for controlling seizures with minimum stimulation energy. We apply our method to a computational model called the Epileptor, which simulates inter-ictal and ictal local field potential data. In order to apply reinforcement learning to the Epileptor, we introduce a specialized reward function and state-space discretization. With the reward function and discretization fixed, we test the effectiveness of the temporal-difference reinforcement learning algorithm TD(0). For periodic pulsatile stimulation, we derive a relation that describes, for any stimulation frequency, the minimal pulse amplitude required to suppress seizures. The TD(0) algorithm is able to identify parameters that control seizures quickly. Additionally, our results show that the TD(0) algorithm refines the stimulation frequency to minimize stimulation energy, thereby converging to optimal parameters reliably. An advantage of the TD(0) algorithm is that it is adaptive, so that the parameters necessary to control the seizures can change over time. We show that the algorithm can converge on the optimal solution in simulation with both slow and fast inter-seizure intervals.
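A stateless variant of the temporal-difference update can be sketched on a toy surrogate environment. The candidate frequencies, suppression probabilities, and energy penalty below are invented stand-ins for the Epileptor simulation and the paper's reward function:

```python
import random

rng = random.Random(0)
freqs = [5, 20, 50, 130, 180]   # hypothetical stimulation frequencies (Hz)
# toy suppression model: only frequencies >= 50 Hz reliably stop seizures
suppress = {5: 0.0, 20: 0.2, 50: 1.0, 130: 1.0, 180: 1.0}

def step(f):
    """Toy environment: reward penalizes both seizures and stimulation energy."""
    seized = rng.random() > suppress[f]
    return -10.0 * seized - 0.01 * f     # energy cost grows with frequency

# epsilon-greedy temporal-difference value learning over a single state
Q = {f: 0.0 for f in freqs}
alpha, eps = 0.05, 0.2
for _ in range(20000):
    f = rng.choice(freqs) if rng.random() < eps else max(Q, key=Q.get)
    Q[f] += alpha * (step(f) - Q[f])     # TD-style update toward the reward

best = max(Q, key=Q.get)
```

The learner settles on the lowest frequency that still suppresses seizures, mirroring the paper's finding that TD(0) trades off seizure control against stimulation energy.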
Optimization of Gas Metal Arc Welding Process Parameters
NASA Astrophysics Data System (ADS)
Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.
2016-09-01
This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration, and heat-affected zone). An L9 orthogonal array was used to design the fabrication of the joints. The experiments were conducted over combinations of voltage (V), current (A), and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, the optimal parameters were obtained, and the significant factors were identified using ANOVA. The welding parameters (speed, current, and voltage) were thus optimized for AISI 1020 using the GMAW process. To confirm the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. These observations may help automotive sub-assembly, shipbuilding, and vessel fabricators and operators obtain optimal welding conditions.
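The grey relational grade computation referred to above can be sketched as follows. This is a minimal illustration with made-up response values, not the paper's measurements: each response is normalized (larger-the-better or smaller-the-better), and the grey relational coefficient uses the customary distinguishing coefficient ζ = 0.5.

```python
def normalize(values, larger_better=True):
    # Map a response column onto [0, 1], where 1 is the ideal value.
    lo, hi = min(values), max(values)
    if larger_better:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def grey_relational_grade(responses, zeta=0.5):
    # responses: list of (values_per_run, larger_better) per characteristic.
    normed = [normalize(v, lb) for v, lb in responses]
    n_runs = len(normed[0])
    grades = []
    for i in range(n_runs):
        coeffs = []
        for col in normed:
            delta = abs(1.0 - col[i])  # deviation from the ideal sequence
            # After normalization, delta_min = 0 and delta_max = 1.
            coeffs.append(zeta / (delta + zeta))
        grades.append(sum(coeffs) / len(coeffs))  # equal-weight grade
    return grades

# Hypothetical data: penetration (larger better), bead width (smaller better).
grades = grey_relational_grade([([2.0, 4.0, 6.0], True),
                                ([8.0, 6.0, 4.0], False)])
```

The run with the highest grade is taken as the best compromise across all quality characteristics; in the paper, ANOVA on these grades then ranks the factors.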
On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling
NASA Astrophysics Data System (ADS)
Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun
2017-08-01
It is useful to develop an effective biodynamic model of seated human occupants to help understand human exposure to transportation vehicle vibrations and to help design and improve anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplified expression for the models was made. Second, all 23 possible structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of the non-dominated sorting genetic algorithm (NSGA-II), based on the Pareto optimization principle, was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study, based on both the goodness of the individual curve fits and the comprehensive goodness of fit across them, was used to assess the models and identify the best one. The identified top configurations were better than those reported in the literature. This methodology may also be extended to develop models with other numbers of DOF.
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from limited unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the proposed method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify them from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
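The abstract does not give the proposed fitting function itself, but the quantity it approximates, the Theis well function W(u), can be evaluated directly from its series expansion, which makes the underlying drawdown relation s = Q W(u) / (4πT), u = r²S/(4Tt), easy to reproduce. A sketch (the pumping-scenario values are hypothetical):

```python
import math

def theis_W(u, terms=30):
    # Theis well function: W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)
    # The series converges quickly for the small u typical of pumping tests.
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    sign, fact, power = 1.0, 1.0, 1.0
    for n in range(1, terms + 1):
        fact *= n
        power *= u
        total += sign * power / (n * fact)
        sign = -sign
    return total

def drawdown(Q, T, S, r, t):
    # Theis drawdown at radius r and time t for pumping rate Q,
    # transmissivity T, and storativity S (consistent SI units).
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_W(u)

# Hypothetical scenario: Q = 0.01 m^3/s, T = 1e-3 m^2/s, S = 1e-4, r = 50 m, t = 1 h.
s_obs = drawdown(0.01, 1e-3, 1e-4, 50.0, 3600.0)
```

The paper's contribution is replacing the table lookup or curve matching of W(u) with a fitted function so that T and S fall out of a linear regression; the sketch above only reproduces the exact relation being fitted.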
Hybrid Quantum-Classical Approach to Quantum Optimal Control.
Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu
2017-04-14
A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
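The selection/crossover/mutation loop that point (2) refers to can be sketched minimally. This is not the variable-complexity encoding developed in the dissertation, just a fixed-length real-coded genetic algorithm on a hypothetical one-dimensional objective, to show why no gradient information is needed.

```python
import random

random.seed(1)

def genetic_minimize(f, bounds, pop_size=30, gens=100, mut=0.1):
    # Minimal real-coded GA: binary-tournament selection, blend
    # crossover, and occasional Gaussian mutation. Only function
    # values are used, so f may be discontinuous or noisy.
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            p1 = a if f(a) < f(b) else b          # tournament selection
            c, d = random.sample(pop, 2)
            p2 = c if f(c) < f(d) else d
            w = random.random()
            child = w * p1 + (1 - w) * p2          # blend crossover
            if random.random() < mut:
                child += random.gauss(0, 0.1 * (hi - lo))  # mutation
            new.append(min(max(child, lo), hi))    # clip to bounds
        pop = new
    return min(pop, key=f)

best = genetic_minimize(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
```

In the dissertation's variable-complexity variant, the chromosome length itself can change during the search; the loop structure above stays the same.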
Optimal topologies for maximizing network transmission capacity
NASA Astrophysics Data System (ADS)
Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.
2018-04-01
It has been widely demonstrated that the structure of a network is a major factor affecting its traffic dynamics. In this work, we try to identify optimal topologies for maximizing network transmission capacity, as well as to build a clear relationship between the structural features of a network and its transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply it to Barabási-Albert scale-free, static scale-free, and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize their structure, and our simulation results suggest that an optimal network for traffic transmission is likely to have a core-periphery structure, whereas assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
NASA Astrophysics Data System (ADS)
Zhang, Yichen; Li, Zhengyu; Zhao, Yijia; Yu, Song; Guo, Hong
2017-02-01
We analyze the security of the two-way continuous-variable quantum key distribution protocol in reverse reconciliation against general two-mode attacks, which represent all accessible attacks at fixed channel parameters. Rather than considering one specific attack model, expressions for the secret key rates of the two-way protocol are derived against all accessible attack models. It is found that there is an optimal two-mode attack that minimizes the performance of the protocol in terms of both secret key rate and maximal transmission distance. We identify this optimal two-mode attack, give its specific attack model, and show the performance of the two-way protocol against it. Even under the optimal two-mode attack, the performance of the two-way protocol is still better than that of the corresponding one-way protocol, which shows the advantage of making double use of the quantum channel and the potential of long-distance secure communication using a two-way protocol.
NASA Astrophysics Data System (ADS)
Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang
2018-04-01
Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO
NASA Astrophysics Data System (ADS)
Gao, C.; Zhang, R. H.
2017-12-01
Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter αTe represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial condition is optimized are compared with experiments in which both the initial condition and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
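The conventional 3-2-1-1 multistep used above as a baseline is simply a square-wave sequence whose alternating pulses last 3, 2, 1, and 1 time units. A sketch of generating such a test input follows; the amplitude, unit pulse length, and sample interval are hypothetical, not the HARV flight values.

```python
def multistep_3211(amplitude, dt, pulse_time):
    # Conventional 3-2-1-1 input: alternating-sign pulses with
    # durations 3T, 2T, T, T, sampled every dt seconds.
    signal = []
    for n_units, sign in [(3, 1), (2, -1), (1, 1), (1, -1)]:
        steps = int(round(n_units * pulse_time / dt))
        signal.extend([sign * amplitude] * steps)
    return signal

# Hypothetical excitation: unit amplitude, 1 s unit pulse, 10 Hz sampling
# -> 7 s of input (30 + 20 + 10 + 10 samples).
u = multistep_3211(1.0, 0.1, 1.0)
```

The durations 3:2:1:1 spread the input energy over a band of frequencies around the expected aircraft modes, which is why the 3-2-1-1 is a common hand-designed alternative to the optimized inputs the paper evaluates.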
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from the process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Fractal attractors in economic growth models with random pollution externalities
NASA Astrophysics Data System (ADS)
La Torre, Davide; Marsiglio, Simone; Privileggi, Fabio
2018-05-01
We analyze a discrete time two-sector economic growth model where the production technologies in the final and human capital sectors are affected by random shocks both directly (via productivity and factor shares) and indirectly (via a pollution externality). We determine the optimal dynamics in the decentralized economy and show how these dynamics can be described in terms of a two-dimensional affine iterated function system with probability. This allows us to identify a suitable parameter configuration capable of generating exactly the classical Barnsley's fern as the attractor of the log-linearized optimal dynamical system.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
NASA Astrophysics Data System (ADS)
Ginting, E.; Tambunanand, M. M.; Syahputri, K.
2018-02-01
Evolutionary Operation (EVOP) is a method designed to be used during routine plant operation to enable high productivity. Quality is one of the critical factors for a company to win the competition. For this reason, product quality was investigated by gathering the company's production data and making direct observations on the factory floor, especially in the drying department, to identify the problem of high water content in the mosquito coils. PT. X, which produces mosquito coils, attempted to reduce product defects caused by inaccurate operating conditions. Water content is one of the parameters of a good-quality mosquito coil: if the moisture content is too high, the product molds and breaks easily, whereas if it is too low, the product is brittle and burns for fewer hours. Three factors affect the optimal water content: stirring time, drying temperature, and drying time. To obtain the required conditions, the Evolutionary Operation (EVOP) method is used; EVOP is an efficient technique for optimizing two or three experimental variables using two-level factorial designs with a center point. The optimal operating conditions found in the experiment are a stirring time of 20 minutes, a drying temperature of 65°C, and a drying time of 130 minutes. The analysis based on the EVOP method gives an optimum water content of 6.90%, which approaches the plant's target value of 7%.
Ashengroph, Morahem; Ababaf, Sajad
2014-12-01
Microbial caffeine removal is a green solution for the treatment of caffeinated products and agro-industrial effluents. We directed this investigation at optimizing a bio-decaffeination process with growing cultures of Pseudomonas pseudoalcaligenes through the Taguchi methodology, a structured statistical approach that can lower variations in a process through design of experiments (DOE). Five parameters, i.e. initial fructose, tryptone, Zn(2+) ion, and caffeine concentrations, as well as incubation time, were selected, and an L16 orthogonal array was applied to design experiments with four 4-level factors and one 3-level factor (4^4 × 3^1). Data analysis was performed using the analysis of variance (ANOVA) method. Furthermore, the optimal conditions were determined by combining the optimal levels of the significant factors and verified by a confirmation experiment. The residual caffeine concentration in the reaction mixture was measured using high-performance liquid chromatography (HPLC). Use of the Taguchi methodology for optimization of design parameters resulted in about 86.14% reduction of caffeine in 48 h of incubation when 5 g/l fructose, 3 mM Zn(2+) ion, and 4.5 g/l caffeine are present in the designed media. Under the optimized conditions, the yield of degradation of caffeine (4.5 g/l) by the native strain Pseudomonas pseudoalcaligenes TPS8 increased from 15.8% to 86.14%, which is 5.4-fold higher than the normal yield. According to the experimental results, the Taguchi methodology provides a powerful tool for identifying the favorable parameters for caffeine removal using strain TPS8, and the approach may also be applicable to similar strains to improve the yield of caffeine removal from caffeine-containing solutions.
Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.
Bürger, Vincent; Briesen, Heiko
2016-10-05
For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles either are based on bulk measurements for simulations on the macroscopic scale or require elaborate setups for obtaining tangential parameters such as using atomic force microscopy. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the necessary information to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions, and include critical forces or moments where such springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, and non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment. 
Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practice, fault-monitoring signals from ship power equipment usually provide few samples, and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification from small-sample data. Meanwhile, to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm adapts the discovery probability and the search step length, which effectively addresses the slow search speed and low calculation accuracy of the basic CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.
NASA Astrophysics Data System (ADS)
Le, Loc Xuan
1987-09-01
A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
Detecting influential observations in nonlinear regression modeling of groundwater flow
Yager, Richard M.
1998-01-01
Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
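For simple (one-regressor) linear regression the influence statistics above reduce to closed-form scalar expressions, which makes the idea easy to see; the paper applies them to an approximately linear nonlinear-regression model, so this sketch with hypothetical data (one high-leverage outlier) illustrates only the linear analogue.

```python
def cooks_distance(x, y):
    # Cook's D for simple linear regression:
    # D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2),
    # with leverage h_i = 1/n + (x_i - xbar)^2 / Sxx.
    n, p = len(x), 2  # p = number of parameters (intercept, slope)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - p)  # residual mean square
    h = [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]  # leverages
    return [e * e * hi / (p * s2 * (1 - hi) ** 2)
            for e, hi in zip(resid, h)]

# Hypothetical data: five points on y = x plus one off-trend,
# high-leverage observation at x = 10.
D = cooks_distance([1, 2, 3, 4, 5, 10], [1, 2, 3, 4, 5, 20])
```

The outlying observation combines a large residual with high leverage, so its Cook's D dwarfs the others, exactly the pattern the paper uses to flag influential head and discharge observations.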
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than BB-BC optimization and converges faster than PSO, making it preferable where accuracy is more essential than convergence speed.
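The claim that TLBO needs no algorithm-specific parameters (only a population size and an iteration count) is visible in a minimal sketch of its two phases. The sphere function below is a hypothetical stand-in for the IIR-filter error surface, which the abstract does not specify.

```python
import random

random.seed(2)

def tlbo_minimize(f, bounds, pop_size=20, iters=100):
    # Minimal TLBO: a teacher phase pulls learners toward the best
    # solution, then a learner phase lets pairs of learners interact.
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):                 # teacher phase
            tf = random.choice([1, 2])            # teaching factor
            cand = clip([pop[i][d] + random.random() *
                         (teacher[d] - tf * mean[d]) for d in range(dim)])
            if f(cand) < f(pop[i]):               # greedy acceptance
                pop[i] = cand
        for i in range(pop_size):                 # learner phase
            j = random.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if f(pop[i]) < f(pop[j]) else -1.0
            cand = clip([pop[i][d] + sign * random.random() *
                         (pop[i][d] - pop[j][d]) for d in range(dim)])
            if f(cand) < f(pop[i]):
                pop[i] = cand
    return min(pop, key=f)

best = tlbo_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

In the paper's setting, f would instead be the error between the outputs of the candidate IIR filter and the unknown plant for a common excitation.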
Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose input parameters that preserve the physics being simulated. To simulate real-world processes effectively, the model's outputs must be close to the observed measurements. To achieve this, the input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.
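The tune-until-minimized calibration loop described above can be sketched in a few lines. The "heat-flow model" below is a toy stand-in with made-up conductivity parameters and outputs, and a simple derivative-free compass (pattern) search replaces the Dakota optimizer:

```python
# Toy stand-in for a "black box" heat-flow model: the hypothetical
# "simulated temperatures" depend on two thermal conductivity
# coefficients k1, k2 (made-up functional form).
def model(k1, k2):
    return [k1 + 0.5 * k2, 2.0 * k1 - k2, k1 * k2]

TRUE = (1.3, 0.7)
observed = model(*TRUE)          # synthetic "measurements"

# Objective: squared error between simulated outputs and observations.
def objective(params):
    sim = model(*params)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

# Derivative-free compass (pattern) search standing in for a Dakota method:
# probe each coordinate direction, halve the step when nothing improves.
def calibrate(x0=(0.5, 0.5), step=0.5, tol=1e-6):
    x, f = list(x0), objective(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = objective(cand)
                if fc < f:
                    x, f, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, f

params, err = calibrate()
```

With a smooth objective and a single minimum this recovers the "unknown" conductivities to high accuracy, mirroring the 2% recovery reported above.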
NASA Astrophysics Data System (ADS)
Tsutsui, Shigeyosi
This paper proposes an aggregation pheromone system (APS) for solving real-parameter optimization problems through the collective behavior of individuals that communicate via aggregation pheromones. APS was tested on several test functions used in evolutionary computation. The results showed that APS could solve real-parameter optimization problems fairly well. A sensitivity analysis of the control parameters of APS is also presented.
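The aggregation-pheromone idea can be illustrated with a deliberately simplified sketch (not Tsutsui's exact APS): new candidates are sampled around the current best individuals, and the sampling spread, which plays the role of the pheromone concentration, is sharpened over time. The population sizes, decay rate, and sphere test function are illustrative choices:

```python
import random

# Sphere function: a standard real-parameter test problem.
def sphere(x):
    return sum(xi * xi for xi in x)

# Simplified pheromone-guided sampling: agents are attracted toward
# better solutions; sigma acts like a pheromone spread that sharpens.
def aps_minimize(f, dim=3, pop=30, iters=200, seed=1):
    rng = random.Random(seed)
    agents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    sigma = 1.0
    for _ in range(iters):
        agents.sort(key=f)
        elite = agents[: pop // 5]            # strongest pheromone sources
        new = []
        for _ in range(pop):
            src = rng.choice(elite)
            new.append([xi + rng.gauss(0, sigma) for xi in src])
        agents = sorted(agents + new, key=f)[:pop]   # elitist replacement
        sigma *= 0.97                          # pheromone sharpening
    return agents[0], f(agents[0])

best, val = aps_minimize(sphere)
```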
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking a generic model for optimization is a computer science problem that has been pursued by numerous researchers. A generic model is a model that can be operated to solve any variety of optimization problems. Using an object-oriented method, a generic model for optimization was constructed. Two optimization methods, simulated annealing and hill climbing, were implemented within the model and compared to identify the more effective one. The results show that both methods gave the same objective function value, and that the hill-climbing-based model consumed the shorter running time.
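The two methods compared above can be sketched on a hypothetical one-dimensional objective; the function, step sizes, and cooling schedule below are illustrative assumptions, not the paper's implementation:

```python
import math, random

def f(x):
    # Hypothetical objective: smooth, single global minimum at x = 2.
    return (x - 2.0) ** 2 + 1.0

# Hill climbing: accept only improving moves.
def hill_climb(x0=0.0, step=0.1, iters=2000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) < fx:
            x, fx = cand, f(cand)
    return x, fx

# Simulated annealing: occasionally accept worse moves, with a
# probability that shrinks as the temperature cools.
def simulated_annealing(x0=0.0, step=0.5, iters=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for k in range(iters):
        t = t0 * (0.995 ** k)              # geometric cooling
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
    return x, fx

x_hc, f_hc = hill_climb()
x_sa, f_sa = simulated_annealing()
```

On a single-minimum objective like this one, both arrive at essentially the same optimum, consistent with the comparison reported above; their running times, not their final values, are what differ.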
NASA Astrophysics Data System (ADS)
Miyauchi, T.; Machimura, T.
2013-12-01
In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf and above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per unit area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf and above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer, developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf and above- and below-ground woody carbon at each of eleven years. 
In an alternative run, errors at the last year (the year of the field survey) were weighted for priority. We compared several of Dakota's global optimization methods, starting from the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, and the SLA, contributed most strongly to changes in both leaf and woody biomass; these parameters were selected for optimization. The measured leaf and above- and below-ground woody biomass carbon densities at the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2. The coliny global optimization method gave better fitness than the efficient global and NCSU DIRECT methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and a lower SLA than the defaults, consistent with the general water-physiological response in a dry climate. The simulation using the weighted objective function came closer to the measurements at the last year, at the cost of lower fitness during the previous years.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection identifies the boundaries of an object against an overlapping background. One classic edge detection method is the Robinson operator, which produces thin, faint grey edge lines. To overcome these deficiencies, we propose improving the edge detection method with a graph-based approach using the Ant Colony Optimization algorithm. The repairs that can be performed are thickening edges and reconnecting broken edges. This edge detection research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and determine the extent to which Ant Colony Optimization can improve non-optimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE, and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with thicker, more assertive edges. Ant Colony Optimization can serve as a method for optimizing the Robinson operator, improving the detected image by an average of 16.77% over the classic Robinson result.
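The MSE and PSNR metrics used above for performance measurement are straightforward to compute between a reference edge map and a detected one; the two tiny 8-bit "images" below are hypothetical:

```python
import math

# MSE between two equal-sized grayscale images given as lists of rows.
def mse(a, b):
    n = sum(len(row) for row in a)
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

# PSNR in dB for 8-bit images (peak value 255); infinite for identical images.
def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak * peak / m)

ref  = [[0, 0, 255], [0, 255, 0]]    # hypothetical reference edge map
test = [[0, 0, 250], [0, 255, 5]]    # hypothetical detected edge map
```

A higher PSNR (lower MSE) against the reference indicates a closer edge map, which is how the optimized and classic Robinson results can be ranked.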
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
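The design criterion, choosing the sampling location that maximizes the expected relative entropy (KL divergence) from prior to posterior, can be sketched on a toy discrete problem. The Gaussian-plume "transport model", four-point prior, and noise level below are illustrative stand-ins for the paper's groundwater setting, and plain Monte Carlo replaces the sparse-grid surrogate:

```python
import math, random

# Toy 1-D "transport" model: concentration at location x from a source at s
# (hypothetical Gaussian plume, standing in for the transport equation).
def conc(s, x):
    return math.exp(-(x - s) ** 2)

sources = [0.0, 1.0, 2.0, 3.0]   # candidate source positions (discrete prior)
prior = [0.25] * 4
NOISE = 0.1                      # measurement noise standard deviation

def likelihood(y, s, x):
    return math.exp(-((y - conc(s, x)) ** 2) / (2 * NOISE ** 2))

# Expected relative entropy (KL from posterior to prior) at design x,
# estimated by Monte Carlo over prior-predictive observations.
def expected_kl(x, n=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = rng.choices(sources, weights=prior)[0]
        y = conc(s, x) + rng.gauss(0, NOISE)
        post = [likelihood(y, si, x) * pi for si, pi in zip(sources, prior)]
        z = sum(post)
        post = [p / z for p in post]
        total += sum(p * math.log(p / q) for p, q in zip(post, prior) if p > 0)
    return total / n

designs = [0.5, 1.5, 2.5, 5.0]   # candidate sampling locations
best = max(designs, key=expected_kl)
```

A location far from all candidate sources (x = 5.0) yields nearly identical predictions for every source and hence little expected information gain, so the criterion prefers a location that separates the candidates.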
Preliminary tests of the electrostatic plasma accelerator
NASA Technical Reports Server (NTRS)
Aston, G.; Acker, T.
1990-01-01
This report describes the results of a program to verify an electrostatic plasma acceleration concept and to identify those parameters most important in optimizing an Electrostatic Plasma Accelerator (EPA) thruster based upon this thrust mechanism. Preliminary performance measurements of thrust, specific impulse and efficiency were obtained using a unique plasma exhaust momentum probe. Reliable EPA thruster operation was achieved using one power supply.
USDA-ARS?s Scientific Manuscript database
MALDI-TOF MS has been utilized as a reliable and rapid tool for microbial fingerprinting at the genus and species levels. Recently, there has been keen interest in using MALDI-TOF MS beyond the genus and species levels to rapidly identify antibiotic resistant strains of bacteria. The purpose of this...
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research uses the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed. The search for the optimal solution in multi-objective optimization of the system is performed by finding the minimum of a dual-target vector, constructed by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The problems are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems deviate considerably from their minimum values outside the regions of the optimal solutions, and these deviations in both capital investment and operating costs grow significantly as the technical parameters move away from their optimal values. The production and operation of conditioners with parameters deviating considerably from the optimal values will lead to increased material and power costs. The research allows one to establish the boundaries of the region of optimal values for the technical and economic parameters in the design of air conditioning systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. The model is validated against 4 kW stack data at various current densities and flow rates, and is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 per kWh for the storage system is identified.
NASA Astrophysics Data System (ADS)
Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li
2015-05-01
In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, this paper presents a parameter optimization method for a planar joint clearance contact force model, and applies the optimized parameters to the dynamic response simulation of a mechanism with an oversized joint clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, a relation between model parameters at different clearances was derived. The dynamic equation of a two-stage reciprocating compressor with four joint clearances was then developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response closer to that of experimental tests, the parameters of the joint clearance model, instead of the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamic response of the model with an oversized joint clearance fault according to the derived parameter relation. The dynamic response of experimental tests verified the effectiveness of this application.
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the Newton-Raphson (NR) method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. 
The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
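The separation exploited by GA-MLR (and by VP) can be sketched on the biexponential decay example: only the two decay times are searched, while the amplitudes are recovered by linear least squares at every trial point. The random search below is a minimal stand-in for the GA, and all parameter values and ranges are made up:

```python
import math, random

# Biexponential decay: y(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2).
# tau1, tau2 are nonlinear parameters; a1, a2 are linear and are
# eliminated by linear least squares at every trial (the "MLR" step).
ts = [0.1 * i for i in range(50)]
TRUE_TAU, TRUE_A = (0.5, 2.0), (1.0, 0.4)

def decay(t, tau1, tau2, a1, a2):
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

data = [decay(t, *TRUE_TAU, *TRUE_A) for t in ts]   # noise-free synthetic data

# Solve the 2x2 normal equations for the amplitudes given the decay times.
def solve_linear(tau1, tau2):
    c1 = [math.exp(-t / tau1) for t in ts]
    c2 = [math.exp(-t / tau2) for t in ts]
    s11 = sum(x * x for x in c1)
    s12 = sum(x * y for x, y in zip(c1, c2))
    s22 = sum(y * y for y in c2)
    r1 = sum(x * d for x, d in zip(c1, data))
    r2 = sum(y * d for y, d in zip(c2, data))
    det = s11 * s22 - s12 * s12
    a1 = (r1 * s22 - r2 * s12) / det
    a2 = (s11 * r2 - s12 * r1) / det
    sse = sum((decay(t, tau1, tau2, a1, a2) - d) ** 2 for t, d in zip(ts, data))
    return a1, a2, sse

# Tiny random search standing in for the GA over the nonlinear parameters only.
def fit(seed=0, iters=3000):
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        tau1 = rng.uniform(0.1, 1.0)
        tau2 = rng.uniform(1.0, 4.0)
        a1, a2, sse = solve_linear(tau1, tau2)
        if best is None or sse < best[-1]:
            best = (tau1, tau2, a1, a2, sse)
    return best

tau1, tau2, a1, a2, sse = fit()
```

Because the amplitudes are never part of the search space, the optimizer only explores a two-dimensional landscape, which is exactly the acceleration the abstract attributes to not fitting the linear parameters.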
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough transform using the normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into one of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations; convergence to the global maximum can be guaranteed only if sufficient a priori knowledge about the smoothness of the objective function is available. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough transform is essentially how fine the parameter space quantization must be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
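A minimal voting implementation of the normal-parameterization Hough transform makes the quantization question concrete: the grid resolutions `n_theta` and `n_rho` below are arbitrary choices, exactly the quantities the paper's convergence conditions are meant to constrain. The point set is hypothetical:

```python
import math

# Straight-line Hough transform with the normal parameterization
# rho = x*cos(theta) + y*sin(theta), voting on a quantized (theta, rho) grid.
def hough_lines(points, n_theta=180, rho_max=100.0, n_rho=200):
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for it in range(n_theta):
            theta = math.pi * it / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ir = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            if 0 <= ir < n_rho:
                acc[it][ir] += 1
    # Return the (theta, rho) cell holding the global maximum vote count.
    votes, it, ir = max((acc[it][ir], it, ir)
                        for it in range(n_theta) for ir in range(n_rho))
    theta = math.pi * it / n_theta
    rho = 2 * rho_max * ir / (n_rho - 1) - rho_max
    return votes, theta, rho

# Collinear points on the line y = x (theta = 3*pi/4, rho = 0) plus one outlier.
pts = [(i, i) for i in range(10)] + [(3, 7)]
votes, theta, rho = hough_lines(pts)
```

If the (theta, rho) bins were made much coarser, the ten collinear votes would smear across neighboring cells and the true maximum could be missed, which is the failure mode the quantization bounds guard against.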
The ESSENCE Supernova Survey: Survey Optimization, Observations, and Supernova Photometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miknaitis, Gajus; Pignata, G.; Rest, A.
We describe the implementation and optimization of the ESSENCE supernova survey, which we have undertaken to measure the equation of state parameter of the dark energy. We present a method for optimizing the survey exposure times and cadence to maximize our sensitivity to the dark energy equation of state parameter w = P/(ρc²) for a given fixed amount of telescope time. For our survey on the CTIO 4m telescope, measuring the luminosity distances and redshifts for supernovae at modest redshifts (z ≈ 0.5 ± 0.2) is optimal for determining w. We describe the data analysis pipeline, based on reliable and robust image subtraction, used to find supernovae automatically and in near real-time. Since making cosmological inferences with supernovae relies crucially on accurate measurement of their brightnesses, we describe our efforts to establish a thorough calibration of the CTIO 4m natural photometric system. In its first four years, ESSENCE has discovered and spectroscopically confirmed 102 type Ia SNe, at redshifts from 0.10 to 0.78, identified through an impartial, effective methodology for spectroscopic classification and redshift determination. We present the resulting light curves for all type Ia supernovae found by ESSENCE and used in our measurement of w, presented in Wood-Vasey et al. (2007).
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2016-04-01
An important source of uncertainty in numerical simulations resides in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce the uncertainties in all of them. Therefore, finding a subset of relatively more sensitive and important parameters, and reducing the errors in those parameters, is a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamic global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared to those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also shows that it is viable to apply "target observations" to reduce the uncertainties in model parameters.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
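A D-optimal design of the kind compared here can be sketched for a simple exponential model: the FIM is assembled from parameter sensitivities at candidate sampling times, and the set of times maximizing det(FIM) is selected. The model form, nominal parameter values, candidate grid, and unit noise are illustrative assumptions, not the paper's examples:

```python
import math
from itertools import combinations

# Model y(t; a, b) = a * exp(b*t), with nominal parameters below.
# Sensitivities dy/da and dy/db at each sampling time build the
# Fisher Information Matrix (unit measurement noise assumed).
A, B = 1.0, -0.5

def sensitivities(t):
    y = A * math.exp(B * t)
    return (y / A, t * y)          # (dy/da, dy/db)

def fim(times):
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        s = sensitivities(t)
        for i in range(2):
            for j in range(2):
                m[i][j] += s[i] * s[j]
    return m

def d_optimality(times):
    m = fim(times)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]   # det(FIM)

# D-optimal choice of 3 sampling times from a candidate grid.
grid = [0.5 * k for k in range(1, 17)]             # 0.5 .. 8.0
best = max(combinations(grid, 3), key=d_optimality)
```

For this decaying model the design keeps the earliest, most informative time and spreads the rest; late times, where the signal and its sensitivities have decayed, contribute little to det(FIM).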
Multiresponse Optimization of Process Parameters in Turning of GFRP Using TOPSIS Method
Parida, Arun Kumar; Routara, Bharat Chandra
2014-01-01
Taguchi's design of experiments is utilized to optimize the process parameters in a turning operation under a dry environment. Three parameters, cutting speed (v), feed (f), and depth of cut (d), each at three levels, are taken for the responses material removal rate (MRR) and surface roughness (Ra). The machining is conducted with a Taguchi L9 orthogonal array, and based on the S/N analysis, the optimal process parameters for surface roughness and MRR are calculated separately. Considering the larger-the-better approach, the optimal process parameters for material removal rate are cutting speed at level 3, feed at level 2, and depth of cut at level 3, that is, v3-f2-d3. Similarly for surface roughness, considering the smaller-the-better approach, the optimal process parameters are cutting speed at level 1, feed at level 1, and depth of cut at level 3, that is, v1-f1-d3. Results of the main effects plot indicate that depth of cut is the most influential parameter for MRR, cutting speed is the most influential parameter for surface roughness, and feed is the least influential parameter for both responses. A confirmation test is conducted for MRR and surface roughness separately. Finally, an attempt has been made to optimize the multiple responses using the technique for order preference by similarity to ideal solution (TOPSIS) with the Taguchi approach. PMID:27437503
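The TOPSIS step can be sketched with hypothetical (MRR, Ra) results for a few parameter settings; the response values and equal weights below are made up for illustration, and MRR is treated as a benefit criterion while Ra is a cost criterion:

```python
import math

# Hypothetical (MRR, Ra) responses for three parameter settings.
alternatives = {
    "v1-f1-d3": (120.0, 1.1),
    "v3-f2-d3": (260.0, 2.4),
    "v2-f2-d2": (180.0, 1.6),
}
weights = (0.5, 0.5)
benefit = (True, False)        # maximize MRR, minimize Ra

def topsis(alts, weights, benefit):
    names = list(alts)
    cols = list(zip(*alts.values()))
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    v = {n: [w * x / s for x, w, s in zip(alts[n], weights, norms)]
         for n in names}
    # Ideal/anti-ideal points per criterion direction.
    ideal = [max(col) if b else min(col)
             for col, b in zip(zip(*v.values()), benefit)]
    worst = [min(col) if b else max(col)
             for col, b in zip(zip(*v.values()), benefit)]
    scores = {}
    for n in names:
        dp = math.sqrt(sum((a - b) ** 2 for a, b in zip(v[n], ideal)))
        dm = math.sqrt(sum((a - b) ** 2 for a, b in zip(v[n], worst)))
        scores[n] = dm / (dp + dm)     # closeness coefficient in (0, 1)
    return scores

scores = topsis(alternatives, weights, benefit)
ranked = sorted(scores, key=scores.get, reverse=True)
```

With these made-up numbers the middle setting ranks first because it balances the two single-response optima, which is exactly the trade-off TOPSIS is used to resolve.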
Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2004-06-01
Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
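The MCS-on-RSM step described above can be sketched as follows; the quadratic response surface, its coefficients, and the input distributions are hypothetical stand-ins for the fitted forging surrogate:

```python
import random

# Quadratic response surface standing in for the expensive forging
# simulation (hypothetical coefficients): die stress as a function of
# die temperature and ram speed.
def rsm(temp, speed):
    return 100.0 + 0.8 * temp + 5.0 * speed + 0.01 * temp * speed

# Monte Carlo propagation of randomness in the process parameters
# through the cheap surrogate instead of the full simulation.
def monte_carlo(n=20000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        temp = rng.gauss(300.0, 10.0)    # die temperature: mean 300, sd 10
        speed = rng.gauss(20.0, 2.0)     # ram speed: mean 20, sd 2
        samples.append(rsm(temp, speed))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean, sd = monte_carlo()
```

Running tens of thousands of samples through the surrogate costs almost nothing, which is the point of replacing the forging simulation with the RSM before applying MCS.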
Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J
2013-10-28
Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time, a perturbed-parameter ensemble approach over climate-sensitive model parameters has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters and identified two simulations with an optimal fit to the proxy data. We simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than in many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations, and how they differ from the remaining simulations, in order to understand which mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the Last Glacial Maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that a single parameter set yields a climate model able to simulate very different palaeoclimates as well as the present-day climate. Interestingly, to achieve the great warmth of the Early Eocene, this version of the model does not require a strong Charney climate sensitivity: it produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.
Sung, KiHoon; Choi, Young Eun; Lee, Kyu Chan
2017-06-01
This is a dosimetric study to identify a simple geometric indicator to discriminate patients who meet the selection criterion for heart-sparing radiotherapy (RT). The authors proposed a cardiac risk index (CRI), directly measurable from the CT images at the time of scanning. Treatment plans were regenerated using the CT data of 312 consecutive patients with left-sided breast cancer. Dosimetric analysis was performed to estimate the risk of cardiac mortality using cardiac dosimetric parameters, such as the relative heart volumes receiving ≥25 Gy (heart V25). For each CT data set, in-field heart depth (HD) and in-field heart width (HW) were measured to generate the geometric parameters, including maximum HW (HWmax) and maximum HD (HDmax). Seven geometric parameters were evaluated as candidates for the CRI. Receiver operating characteristic (ROC) curve analyses were used to examine the overall discriminatory power of the geometric parameters to select high-risk patients (heart V25 ≥ 10%). Seventy-one high-risk (22.8%) and 241 low-risk patients (77.2%) were identified by dosimetric analysis. The geometric and dosimetric parameters were significantly higher in the high-risk group. Heart V25 showed strong positive correlations with all geometric parameters examined (r > 0.8, p < 0.001). The product of HDmax and HWmax (CRI) revealed the largest area under the curve (AUC) value (0.969) and maintained 100% sensitivity and 88% specificity at the optimal cut-off value of 14.58 cm². The cardiac risk index, proposed as a simple geometric indicator to select high-risk patients, provides useful guidance for clinicians considering optimal implementation of heart-sparing RT. © 2016 The Royal Australian and New Zealand College of Radiologists.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells; they are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainty associated with this derivation of model parameters is very high, which has impacted their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using a PSO algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
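The improved PSO described above can be sketched as follows. The exact schedules are not given in the abstract, so the linearly decreasing inertia weight and the arccosine-shaped acceleration coefficients below are assumed forms, and the sphere objective and default sizes (20 particles, 30 iterations, matching the reported settings) are purely illustrative:

```python
import math
import random

def pso(f, dim, bounds, n_particles=20, n_iter=30, seed=0):
    # PSO sketch with a linearly decreasing inertia weight and
    # arccosine-scheduled acceleration coefficients (assumed forms).
    rng = random.Random(seed)
    lo, hi = bounds
    w_max, w_min = 0.9, 0.4
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(n_iter):
        frac = t / max(n_iter - 1, 1)
        w = w_max - (w_max - w_min) * frac          # linear decrease
        s = math.acos(2.0 * frac - 1.0) / math.pi   # 1 -> 0 over the run
        c1 = 0.5 + 2.0 * s          # cognitive term shrinks over time
        c2 = 0.5 + 2.0 * (1.0 - s)  # social term grows over time
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Illustrative run on a 2-D sphere function.
best, fbest = pso(lambda z: sum(c * c for c in z), 2, (-5.0, 5.0))
```

In the paper the objective would instead be a flood-forecast error measure driven by the Liuxihe model; the schedule logic is the part the abstract emphasizes.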
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
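The tuning-parameter selection step can be sketched as a high-dimensional BIC minimized along a precomputed solution path. The constant choice below (a log(log n) factor times log p) is an assumption in the spirit of the proposal, and the path triples are hypothetical:

```python
import math

def hbic(rss, df, n, p):
    # High-dimensional BIC (assumed form): penalizes model size by
    # log(log n) * log(p) / n, heavier than classical BIC when p >> n.
    return math.log(rss / n) + df * math.log(math.log(n)) * math.log(p) / n

def select_tuning(path, n, p):
    # path: list of (lambda, RSS, df) triples along a solution path;
    # return the lambda minimizing the criterion.
    return min(path, key=lambda t: hbic(t[1], t[2], n, p))[0]

# Hypothetical path: a null fit, a sparse fit, and an overfit model.
path = [(1.0, 100.0, 0), (0.5, 20.0, 2), (0.1, 19.0, 10)]
best_lam = select_tuning(path, n=50, p=200)
```

The middle model wins: the overfit model barely lowers the residual sum of squares while paying a large dimension penalty, which is the behavior the criterion is designed to exploit.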
Dalwadi, Chintan; Patel, Gayatri
2016-01-01
The purpose of this study was to investigate the Quality by Design (QbD) principle for the preparation of hydrogel products, to demonstrate both the practicability and the utility of applying the QbD concept to hydrogel-based controlled-release systems. Product and process understanding helps decrease the variability of critical material and process parameters, which yields quality product output and reduces risk. This study includes the identification of the Quality Target Product Profile (QTPP) and Critical Quality Attributes (CQAs) from the literature or preliminary studies. To identify and control the variability in process and material attributes, two QbD tools were utilized: Quality Risk Management (QRM) and experimental design. Further, these help to identify the effect of these attributes on the CQAs. Potential risk factors were identified from a fishbone diagram, screened by risk assessment, and optimized by a 3-level, 2-factor experimental design with center points in triplicate to analyze the precision of the target process. The optimized formulation was further characterized by gelling time, gelling temperature, rheological parameters, in-vitro biodegradation and in-vitro drug release. A design space was created using the experimental design tool; working within this control space keeps all failure modes below the risk level. In conclusion, the QbD approach with the QRM tool provides a potent and effective framework for building quality into the hydrogel.
Designing CAF-adjuvanted dry powder vaccines: spray drying preserves the adjuvant activity of CAF01.
Ingvarsson, Pall Thor; Schmidt, Signe Tandrup; Christensen, Dennis; Larsen, Niels Bent; Hinrichs, Wouter Leonardus Joseph; Andersen, Peter; Rantanen, Jukka; Nielsen, Hanne Mørck; Yang, Mingshi; Foged, Camilla
2013-05-10
Dry powder vaccine formulations are highly attractive due to improved storage stability and the possibility for particle engineering, as compared to liquid formulations. However, a prerequisite for formulating vaccines into dry formulations is that their physicochemical and adjuvant properties remain unchanged upon rehydration. Thus, we have identified and optimized the parameters of importance for the design of a spray-dried powder formulation of the cationic liposomal adjuvant formulation 01 (CAF01), composed of dimethyldioctadecylammonium (DDA) bromide and trehalose 6,6'-dibehenate (TDB). The optimal excipient to stabilize CAF01 during spray drying and for the design of nanocomposite microparticles was identified among mannitol, lactose and trehalose. Trehalose and lactose were promising stabilizers with respect to preserving liposome size, as compared to mannitol. Trehalose and lactose were in the glassy state upon co-spray drying with the liposomes, whereas mannitol appeared crystalline, suggesting that the ability of the stabilizer to form a glassy matrix around the liposomes is one of the prerequisites for stabilization. Systematic studies on the effect of process parameters suggested that a fast drying rate is essential to avoid phase separation and lipid accumulation at the surface of the microparticles during spray drying. Finally, immunization studies in mice with CAF01 in combination with the tuberculosis antigen Ag85B-ESAT6-Rv2660c (H56) demonstrated that spray drying of CAF01 with trehalose under optimal processing conditions resulted in the preservation of the adjuvant activity in vivo. These data demonstrate the importance of liposome stabilization via optimization of formulation and processing conditions in the engineering of dry powder liposome formulations. Copyright © 2013 Elsevier B.V. All rights reserved.
Chawla, A; Mukherjee, S; Karthikeyan, B
2009-02-01
The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH™ explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscles. The optimal parameters extracted from this study are comparable with parameters reported in the literature. Bulk modulus and short-term shear modulus are found to be more influential than long-term shear modulus in predicting the stress-strain response for the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain-rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
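The central claim above, that an asymmetric estimation-error cost shifts the optimal estimate away from the maximum-likelihood value, can be reproduced numerically. The 4:1 quadratic cost asymmetry and the standard Gaussian posterior below are illustrative assumptions, not the model's actual cost function:

```python
import random

def optimal_estimate(samples, cost):
    # Grid-search the point estimate minimizing the Monte Carlo mean
    # cost over posterior samples of the unknown parameter.
    lo, hi = min(samples), max(samples)
    grid = [lo + (hi - lo) * k / 200.0 for k in range(201)]
    return min(grid,
               key=lambda e: sum(cost(e - s) for s in samples) / len(samples))

def asym_cost(err, k_over=4.0, k_under=1.0):
    # Overestimation (err > 0) costs four times more than
    # underestimation of the same magnitude (assumed ratio).
    return (k_over if err > 0 else k_under) * err * err

# Posterior samples centered on the maximum-likelihood value 0.
rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(4000)]
est = optimal_estimate(samples, asym_cost)
```

With a symmetric cost the minimizer sits at the posterior mean (here 0); with the asymmetric cost it moves toward the cheap-error side, which is exactly the deviation from the maximum-likelihood estimate that the abstract argues biases both action and perception.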
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
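Winsorization, used above to blunt additive outliers before least-squares estimation, can be sketched as follows; the IQR-based clipping rule is an assumed common variant, not necessarily the authors' exact procedure:

```python
def winsorize(series, k=1.5):
    # Clamp observations lying further than k * IQR outside the
    # quartiles (assumed rule), limiting the leverage of additive
    # outliers on subsequent least-squares fitting.
    s = sorted(series)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [min(max(x, lo), hi) for x in series]
```

A single spike in a monthly case series is pulled back to the fence value, so it inflates neither the residual variance nor the estimated SARIMA coefficients.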
Texture-based segmentation of temperate-zone woodland in panchromatic IKONOS imagery
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Bugnet, Pierre; Cavayas, Francois
2003-08-01
We have performed a study to identify optimal texture parameters for woodland segmentation in a highly non-homogeneous urban area from a temperate-zone panchromatic IKONOS image. Texture images are produced with the sum- and difference-histogram method, which depends on two parameters: window size f and displacement step p. The four texture features yielding the best discrimination between classes are the mean, contrast, correlation and standard deviation. The f-p combinations 17-1, 17-2, 35-1 and 35-2 give the best performance, with an average classification rate of 90%.
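The sum- and difference-histogram features can be computed directly from pixel pairs at a given displacement. This is a minimal sketch following Unser-style definitions over a whole patch (rather than a sliding window), and the exact feature formulas are assumed rather than taken from the paper:

```python
def sum_diff_features(img, dx, dy):
    # Texture features from sum/difference statistics of grey-level
    # pairs at displacement (dx, dy); img is a 2-D list of grey levels.
    h, w = len(img), len(img[0])
    sums, diffs = [], []
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y][x], img[y + dy][x + dx]
            sums.append(a + b)
            diffs.append(a - b)
    n = float(len(sums))
    mean = sum(sums) / (2.0 * n)                       # mean grey level
    contrast = sum(d * d for d in diffs) / n           # difference energy
    var_sum = sum((s - 2.0 * mean) ** 2 for s in sums) / n
    correlation = 0.5 * (var_sum - contrast)           # pairwise covariance
    return {"mean": mean, "contrast": contrast, "correlation": correlation}
```

Computing these per window at each f-p combination and classifying on the resulting feature images is the experiment the abstract sweeps over.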
An RBF-PSO based approach for modeling prostate cancer
NASA Astrophysics Data System (ADS)
Perracchione, Emma; Stura, Ilaria
2016-06-01
Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy arises in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. © 2009 Elsevier B.V. All rights reserved.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; with good initial solutions, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
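The ranking idea above, better fitness for solutions dominated by fewer others, can be sketched for a minimization problem in two objectives (e.g. flight time and propellant mass); the tuples below are hypothetical points in that objective space:

```python
def dominates(a, b):
    # Minimization on all objectives: a dominates b if it is no worse
    # on every objective and strictly better on at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_counts(points):
    # Fitness proxy: for each point, count how many other points
    # dominate it; lower counts mean better (Pareto-closer) solutions.
    return [sum(dominates(q, p) for q in points) for p in points]
```

Points with count 0 form the current non-dominated front; selection pressure toward low counts is what drives the population to the Pareto front described in the abstract.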
New generation photoelectric converter structure optimization using nano-structured materials
NASA Astrophysics Data System (ADS)
Dronov, A.; Gavrilin, I.; Zheleznyakova, A.
2014-12-01
In the present work, the influence of anodizing process parameters on PAOT geometric parameters was studied with the aim of optimizing and increasing ETA-cell efficiency. Optimal geometrical parameters were obtained from the calculations. Parameters such as anodizing current density, electrolyte composition and temperature, as well as the anodic oxidation process time, were selected for this investigation. Using the optimized TiO2 photoelectrode layer, with a 3.6 μm porous layer thickness and a pore diameter of more than 80 nm, the ETA-cell efficiency was increased threefold compared to a non-nanostructured TiO2 photoelectrode.
Bowman, Wesley A; Robar, James L; Sattarivand, Mike
2017-03-01
Stereoscopic x-ray image guided radiotherapy for lung tumors is often hindered by bone overlap and limited soft-tissue contrast. This study aims to evaluate the feasibility of dual-energy imaging techniques and to optimize parameters of the ExacTrac stereoscopic imaging system to enhance soft-tissue imaging for application to lung stereotactic body radiation therapy. Simulated spectra and a physical lung phantom were used to optimize filter material, thickness, tube potentials, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify candidate filter materials in the atomic number range 3-83, based on a metric defined to separate the high- and low-energy spectra. Both energies used the same filter due to the time constraints of imaging in the presence of respiratory motion. The lung phantom contained bone, soft tissue, and tumor-mimicking materials, and it was imaged with filter thicknesses in the range 0-0.7 mm and tube potentials of 60-80 kVp for the low energy and 120 or 140 kVp for the high energy. Optimal dual-energy weighting factors were obtained when the bone to soft-tissue contrast-to-noise ratio (CNR) was minimized. Optimal filter thickness and tube potential were achieved by maximizing tumor-to-background CNR. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom with a spherical tumor-mimicking material inserted in its lung were acquired and evaluated for bone subtraction and tumor contrast. Imaging dose was measured using the dual-energy technique with and without beam filtration and matched to that of a clinical conventional single-energy technique. Tin was the material of choice for beam filtering, providing the best energy separation while being non-toxic and non-reactive. The best soft-tissue-weighted image in the lung phantom was obtained using 0.2 mm tin and the (140, 60) kVp pair.
Dual-energy images of the Rando phantom with the tin filter showed noticeable improvement in bone elimination, tumor contrast, and noise content when compared to dual-energy imaging with no filtration. The surface dose was 0.52 mGy per stereoscopic view for both the clinical single-energy technique and the dual-energy technique, with and without the tin filter. Dual-energy soft-tissue imaging is feasible without additional imaging dose using the ExacTrac stereoscopic imaging system with optimized acquisition parameters and no beam filtration. The addition of a single tin filter for both the high and low energies brings noticeable improvements to dual-energy imaging with optimized parameters. Clinical implementation of a dual-energy technique on ExacTrac stereoscopic imaging could improve lung tumor visibility. © 2017 American Association of Physicists in Medicine.
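The weighted subtraction behind dual-energy soft-tissue imaging can be illustrated with a log-subtraction sketch. The closed-form weight that exactly cancels bone contrast between two region-of-interest intensity pairs is an assumed simplification of the CNR-minimization search described above, and the intensities are hypothetical normalized transmissions:

```python
import math

def soft_tissue_weight(bone_high, bone_low, bkgd_high, bkgd_low):
    # Log-subtraction dual energy: S = ln(I_high) - w * ln(I_low).
    # The weight that cancels bone contrast against background satisfies
    # ln(bone_high / bkgd_high) = w * ln(bone_low / bkgd_low).
    return math.log(bone_high / bkgd_high) / math.log(bone_low / bkgd_low)

def soft_tissue_image(high, low, w):
    # Apply the weighted log subtraction pixel-wise to two 2-D images.
    return [[math.log(h) - w * math.log(l) for h, l in zip(rh, rl)]
            for rh, rl in zip(high, low)]
```

With the weight chosen this way, a bone pixel and a background pixel map to the same value in the subtracted image, which is the "bone elimination" evaluated on the phantom images; in practice the weight is tuned by minimizing the measured bone-to-soft-tissue CNR rather than from two ideal ROI values.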
Hashim, H A; Abido, M A
2015-01-01
This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled 'morphing as an independent variable', formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.
Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.
2016-06-01
The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proven to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and corresponding image segments, was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
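The three discrepancy indices combine as follows. The definitions of PSE and NSR below are assumed from the common usage of these indices (undersegmented area relative to total reference area, and relative mismatch in object counts), with ED2 as their Euclidean combination:

```python
import math

def ed2(under_seg_area, ref_area, n_ref_polygons, n_segments):
    # Segmentation discrepancy indices:
    #   PSE = undersegmented area / total reference area
    #   NSR = |reference polygon count - segment count| / reference count
    #   ED2 = sqrt(PSE^2 + NSR^2), the combined discrepancy
    pse = under_seg_area / ref_area
    nsr = abs(n_ref_polygons - n_segments) / n_ref_polygons
    return pse, nsr, math.sqrt(pse * pse + nsr * nsr)
```

A perfect segmentation gives (0, 0, 0); scoring each Scale-Shape-Compactness combination by ED2 and taking the minimizer is the selection procedure the abstract describes.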
NASA Astrophysics Data System (ADS)
Nor Khairusshima, M. K.; Hafiz Zakwan, B. Muhammad; Suhaily, M.; Sharifah, I. S. S.; Shaffiar, N. M.; Rashid, M. A. N.
2018-01-01
Carbon Fibre Reinforced Plastic (CFRP) composite has become one of the most widely used materials in industries such as automotive, aeronautics, aerospace and aircraft. CFRP is attractive due to its promising strength and high mechanical properties, as well as its high resistance to corrosion. Besides being abrasive owing to its carbon content, CFRP is an anisotropic material, so knowledge of machining metals and steels cannot be applied directly to machining CFRP. Improper techniques and parameters used to machine CFRP may result in high tool wear. This paper studies the tool wear of an 8 mm diameter carbide cutting tool during milling of CFRP. Response Surface Methodology (RSM) was used to predict the suitable cutting parameters, within the ranges 3500-6220 rev/min for cutting speed, 200-245 mm/min for feed rate, and 0.4-1.8 mm for depth of cut, that produce the optimized result (least tool wear). Based on the developed mathematical model, feed rate was identified as the most significant factor influencing tool wear. The optimized cutting parameters are a cutting speed, feed rate and depth of cut of 3500 rev/min, 200 mm/min and 0.5 mm, respectively, with a tool wear of 0.0267 mm. It can also be observed that tool wear increases as the cutting speed and feed rate increase.
Clemen, Christof B; Benderoth, Günther E K; Schmidt, Andreas; Hübner, Frank; Vogl, Thomas J; Silber, Gerhard
2017-01-01
In this study, useful methods for active human skeletal muscle material parameter determination are provided. First, a straightforward approach to the implementation of a transversely isotropic hyperelastic continuum mechanical material model in an invariant formulation is presented. This procedure is found to be feasible even if the strain energy is formulated in terms of invariants other than those predetermined by the software's requirements. Next, an appropriate experimental setup for the observation of activation-dependent material behavior, corresponding data acquisition, and evaluation is given. Geometry reconstruction based on magnetic resonance imaging of different deformation states is used to generate realistic, subject-specific finite element models of the upper arm. Using the deterministic SIMPLEX optimization strategy, a convenient quasi-static passive-elastic material characterization is pursued; the results of this approach used to characterize the behavior of human biceps in vivo indicate the feasibility of the illustrated methods to identify active material parameters comprising multiple loading modes. A comparison of a contact simulation incorporating the optimized parameters to a reconstructed deformed geometry of an indented upper arm shows the validity of the obtained results regarding deformation scenarios perpendicular to the effective direction of the nonactivated biceps. However, for a valid, activatable, general-purpose material characterization, the material model needs some modifications as well as a multicriteria optimization of the force-displacement data for different loading modes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimization of crystallization conditions for biological macromolecules.
McPherson, Alexander; Cudney, Bob
2014-11-01
For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success.
NASA Technical Reports Server (NTRS)
Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue
2009-01-01
We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented by a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indexes, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
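The Sobol indices referenced above can be illustrated with a minimal Monte Carlo sketch, not the Noah LSM itself: first-order indices via the Saltelli (2010) estimator and total indices via the Jansen (1999) estimator, applied to a hypothetical additive toy model in which one parameter dominates the output variance and one is inert. Sample counts and the toy model are illustrative assumptions.

```python
import numpy as np

def sobol_indices(model, n_params, n_samples=4096, seed=0):
    """Estimate first-order (S1) and total (ST) Sobol indices on the unit
    hypercube with the Saltelli/Jansen Monte Carlo estimators."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(n_params), np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with B's column i
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var          # Saltelli (2010)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # Jansen (1999)
    return S1, ST

# Hypothetical toy model: x0 dominates the variance, x2 is inert.
def toy(X):
    return 4.0 * X[:, 0] + 0.2 * X[:, 1]

S1, ST = sobol_indices(toy, n_params=3)
```

With this additive toy model, the estimates show the expected pattern: S1 and ST nearly coincide for each parameter (no interactions), and both vanish for the inert third parameter.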
Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio
2013-08-01
The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
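The practical-identifiability idea behind such model-based design of experiments can be sketched with the Fisher information matrix (FIM) of a hypothetical one-compartment kinetic model, not the PK-PD models of the paper: sample times are chosen to maximize det(FIM), the classical D-optimality criterion. All parameter and noise values below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def fim(times, k=0.3, V=10.0, dose=100.0, sigma=0.1):
    """Fisher information matrix for C(t) = (dose/V) * exp(-k*t) with
    additive Gaussian measurement noise of standard deviation sigma."""
    F = np.zeros((2, 2))
    for t in times:
        C = dose / V * np.exp(-k * t)
        s = np.array([-t * C, -C / V])   # sensitivities dC/dk, dC/dV
        F += np.outer(s, s) / sigma**2
    return F

# D-optimal design: the pair of sample times maximizing det(FIM)
# over a candidate grid (grid bounds are illustrative assumptions).
grid = np.linspace(0.25, 12.0, 48)
best = max(combinations(grid, 2), key=lambda ts: np.linalg.det(fim(ts)))
```

A singular FIM at a candidate design would signal practical non-identifiability of (k, V) from those sample times; maximizing the determinant keeps both parameters jointly estimable.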
2009-10-01
...phase and factors which may cause accelerated growth rates is key to achieving a reliable and robust bearing design. The end goal is to identify control parameters for optimizing bearing materials for improved... 25.0 nm and were each fabricated from the same material heats respectively to a custom design print to ABEC 5 quality and had split inner rings. Each had...
Kovács, A; Berkó, Sz; Csányi, E; Csóka, I
2017-03-01
The aim of our present work was to evaluate the applicability of the Quality by Design (QbD) methodology in the development and optimization of nanostructured lipid carriers containing salicylic acid (NLC SA). Within the Quality by Design methodology, special emphasis is laid on the adaptation of the initial risk assessment step in order to properly identify the critical material attributes and critical process parameters in formulation development. NLC SA products were formulated by the ultrasonication method using Compritol 888 ATO as solid lipid, Miglyol 812 as liquid lipid and Cremophor RH 60® as surfactant. LeanQbD Software and StatSoft Inc. Statistica for Windows 11 were employed to identify the risks. Three highly critical quality attributes (CQAs) for NLC SA were identified, namely particle size, particle size distribution and aggregation. Five attributes of medium influence were identified, including dissolution rate, dissolution efficiency, pH, lipid solubility of the active pharmaceutical ingredient (API) and entrapment efficiency. Three critical material attributes (CMAs) and critical process parameters (CPPs) were identified: surfactant concentration, solid lipid/liquid lipid ratio and ultrasonication time. The CMAs and CPPs are considered as independent variables and the CQAs are defined as dependent variables. A 2³ factorial design was used to evaluate the role of the independent and dependent variables. Based on our experiments, an optimal formulation can be obtained when the surfactant concentration is set to 5%, the solid lipid/liquid lipid ratio is 7:3 and the ultrasonication time is 20 min. The optimal NLC SA showed a narrow size distribution (0.857±0.014) with a mean particle size of 114±2.64 nm. The NLC SA product showed a significantly higher in vitro drug release compared to the micro-particle reference preparation containing salicylic acid (MP SA). Copyright © 2016 Elsevier B.V. All rights reserved.
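A 2³ full factorial design of the kind used above simply enumerates every combination of two levels for the three critical factors. A minimal sketch follows; the low levels are hypothetical (the abstract reports only the optimal settings, used here as the high levels).

```python
from itertools import product

# Coded levels: -1 = low, +1 = high. Low levels are illustrative assumptions.
factors = {
    "surfactant_pct": (3.0, 5.0),
    "solid_liquid_ratio": (6 / 4, 7 / 3),
    "sonication_min": (10, 20),
}

def full_factorial(factors):
    """Enumerate all 2**k runs of a two-level full factorial design."""
    names = list(factors)
    runs = []
    for levels in product((-1, +1), repeat=len(names)):
        run = {n: factors[n][(c + 1) // 2] for n, c in zip(names, levels)}
        run["coded"] = levels    # keep the coded levels for effect estimation
        runs.append(run)
    return runs

design = full_factorial(factors)   # 8 runs for k = 3 factors
```

Each measured CQA would then be regressed on the coded levels to estimate main effects and interactions.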
Wang, Xueqin; Cui, Hongyang; Shi, Jianhong; Zhao, Xinyu; Zhao, Yue; Wei, Zimin
2015-12-01
The aim of this study was to compare the bacterial structure of seven different composts. The primary environmental factors affecting bacterial species were identified, and a strategy to enhance the abundance of uncultured bacteria through controlling relevant environmental parameters was proposed. The results showed that the physical-chemical parameters of each different pile changed in its own manner during composting, which affected the structure and succession of bacteria in different ways. DGGE profiles showed that there were 10 prominent species during composting. Among them, four species existed in all compost types, two species existed in several piles and four species were detected in a single material. Redundancy analysis results showed that bacterial species compositions were significantly influenced by C/N and moisture (p<0.05). The optimal range of C/N was 14-27. Based on these results, the primary environmental factors affecting a certain species were further identified as a potential control of bacterial diversity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement
NASA Astrophysics Data System (ADS)
Liu, Dalong; Xu, Lijuan
2018-01-01
In order to improve the accuracy of robot pose positioning and control, this paper proposes a dynamic tracking measurement method for robot pose optimization control based on actually measured D-H parameters of the robot, with feedback compensation of those parameters. According to the geometric parameters obtained by pose tracking measurement, an improved multi-sensor information fusion extended Kalman filter method with continuous self-optimal regression is applied, using the geometric relationships between joint axes for the kinematic parameters in the model. The link model parameters obtained can be fed back to the robot in a timely manner to implement parameter correction and compensation; finally, the optimal attitude angle is obtained and robot pose optimization control is realized. Experiments were performed on an independently developed 6R joint robot under dynamic tracking control. The simulation results show that the control method improves robot positioning accuracy and has the advantages of versatility, simplicity, and ease of operation.
Design of Experiments for the Thermal Characterization of Metallic Foam
NASA Technical Reports Server (NTRS)
Crittenden, Paul E.; Cole, Kevin D.
2003-01-01
Metallic foams are being investigated for possible use in the thermal protection systems of reusable launch vehicles. As a result, the performance of these materials needs to be characterized over a wide range of temperatures and pressures. In this paper a radiation/conduction model is presented for heat transfer in metallic foams. Candidates for the optimal transient experiment to determine the intrinsic properties of the model are found by two methods. First, an optimality criterion is used to design a single heating event that estimates all of the parameters at once. Second, a pair of heating events is used, in which one heating event is optimal for finding the parameters related to conduction and the other is optimal for finding the parameters associated with radiation. Simulated data containing random noise were analyzed to determine the parameters using both methods. In all cases the parameter estimates could be improved by analyzing a larger data record than suggested by the optimality criterion.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for parameter values neighbouring the optima, and the results confirmed that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
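The HS algorithm used above can be sketched in generic form. The toy objective below merely stands in for the structure's peak-acceleration response; the harmony memory size, HMCR, PAR and bandwidth values are illustrative assumptions, not those of the study.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=3000, seed=1):
    """Minimize f over a box via basic harmony search."""
    rnd = random.Random(seed)
    dim = len(bounds)
    hm = [[rnd.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    costs = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rnd.random() < hmcr:                 # memory consideration
                v = rnd.choice(hm)[d]
                if rnd.random() < par:              # pitch adjustment
                    v += bw * rnd.uniform(-1, 1) * (bounds[d][1] - bounds[d][0])
            else:                                   # random selection
                v = rnd.uniform(*bounds[d])
            lo, hi = bounds[d]
            new.append(min(max(v, lo), hi))
        c = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                        # replace worst harmony
            hm[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return hm[best], costs[best]

# Toy stand-in for the acceleration-response objective.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = harmony_search(sphere, [(-5, 5)] * 2)
```

In the study's setting, f would be a time-history simulation returning peak acceleration, with a penalty when the displacement limit is exceeded.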
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling the EnKF with information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
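The EnKF analysis step underlying such parameter estimation can be sketched as a generic perturbed-observation update on a toy linear model; this is not the SEOD method itself, and the prior, truth and noise values are illustrative assumptions.

```python
import numpy as np

def enkf_update(ens, fwd, obs, obs_err, seed=0):
    """One perturbed-observation EnKF analysis step: update a parameter
    ensemble `ens` (n, p) against measurements `obs` (m,) using the
    predicted observations fwd(ens) (n, m)."""
    rng = np.random.default_rng(seed)
    Y = fwd(ens)                                    # predicted measurements
    X = ens - ens.mean(axis=0)
    D = Y - Y.mean(axis=0)
    n = ens.shape[0]
    Cxy = X.T @ D / (n - 1)                         # parameter-obs covariance
    Cyy = D.T @ D / (n - 1) + np.diag(obs_err**2)   # obs-space covariance
    K = Cxy @ np.linalg.inv(Cyy)                    # Kalman gain
    perturbed = obs + rng.normal(0, obs_err, size=Y.shape)
    return ens + (perturbed - Y) @ K.T

# Toy problem: recover a = 2 from noisy observations of y = a * x.
x_obs = np.array([1.0, 2.0, 3.0])
obs = 2.0 * x_obs                                   # noise-free truth for brevity
obs_err = np.full(3, 0.05)
ens = np.random.default_rng(1).normal(0.5, 1.0, size=(200, 1))  # prior ensemble
post = enkf_update(ens, lambda e: e @ x_obs[None, :], obs, obs_err)
```

The posterior ensemble collapses toward the true parameter; an optimal-design layer such as SEOD would choose which observations to assimilate so that this contraction is maximized.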
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang
This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained on the load data to forecast the future load. For better performance of the SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong
2017-03-01
Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
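The constriction-coefficient PSO variant that performs best above can be sketched generically; the Clerc-Kennedy coefficient χ ≈ 0.7298 follows from c1 = c2 = 2.05. Two caveats: this sketch uses a global-best topology rather than the ring topology favored by the study, and the toy objective merely stands in for a localization residual.

```python
import math, random

def pso_constriction(f, bounds, n=30, iters=400, c1=2.05, c2=2.05, seed=3):
    """Global-best PSO with the Clerc-Kennedy constriction coefficient."""
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    g = min(range(n), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Toy stand-in for a sum of squared range-residuals in localization.
sphere = lambda x: sum(v * v for v in x)
best, cost = pso_constriction(sphere, [(-10, 10)] * 2)
```

A ring topology would replace `gbest` with each particle's best neighbor, trading convergence speed for robustness against premature convergence.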
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is to reduce the energy consumption of road vehicles, and different energy optimization strategies can be found in particular solutions. The most popular, but least sophisticated, is so-called eco-driving, a strategy that emphasizes particular driver behavior. In more sophisticated solutions, driver behavior is supported by a control system that measures driving parameters and suggests proper operation to the driver. Another strategy applies various engineering solutions that aid optimization of the energy consumption process; such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the control computer of a vehicle. The third strategy is based on optimizing the designed vehicle, taking into account especially the main sub-systems of the technical means. In this approach, the optimal level of energy consumption by a vehicle is obtained through the synergetic results of individually optimizing particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural one, the drive one, and the control one. In the case of the structural sub-system, optimization of the energy consumption level is related to optimization of the weight parameter and of the aerodynamic parameter; the result is an optimized vehicle body. Regarding the drive sub-system, optimization of the energy consumption level is related to fuel or power consumption, using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III).
Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located farther from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and since imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work. A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object.
© 2018 American Association of Physicists in Medicine.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
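The conventional SA baseline described above (random proposals from a shrinking neighborhood, temperature-controlled acceptance of worse moves) can be sketched as follows. This is the single-trajectory algorithm, with no recursive branching; the cooling rate, iteration count and multimodal test objective are illustrative assumptions.

```python
import math, random

def simulated_annealing(f, bounds, iters=20000, t0=1.0, cooling=0.9995, seed=7):
    """Conventional SA: always accept improvements; accept worse moves
    with probability exp(-delta / T) under a geometric cooling schedule."""
    rnd = random.Random(seed)
    dim = len(bounds)
    x = [rnd.uniform(*bounds[d]) for d in range(dim)]
    fx = f(x)
    best, fbest = x[:], fx
    T = t0
    for k in range(iters):
        shrink = 1.0 - k / iters            # proposal neighborhood shrinks
        cand = []
        for d in range(dim):
            lo, hi = bounds[d]
            step = 0.5 * (hi - lo) * shrink * rnd.uniform(-1, 1)
            cand.append(min(max(x[d] + step, lo), hi))
        fc = f(cand)
        if fc < fx or rnd.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        T *= cooling                        # geometric cooling
    return best, fbest

# Multimodal (Rastrigin-style) toy objective to exercise the global search.
rastrigin = lambda x: sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)
xb, fb = simulated_annealing(rastrigin, [(-5.12, 5.12)] * 2)
```

RBSA would launch several such trajectories recursively from promising configurations rather than following this single path.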
Optimized model tuning in medical systems.
Kléma, Jirí; Kubalík, Jirí; Lhotská, Lenka
2005-12-01
In medical systems it is often advantageous to utilize specific problem situations (cases) in addition to or instead of a general model. Decisions are then based on relevant past cases retrieved from a case memory. The reliability of such decisions depends directly on the ability to identify cases of practical relevance to the current situation. This paper discusses issues of automated tuning in order to obtain a proper definition of mutual case similarity in a specific medical domain. The main focus is on a reasonably time-consuming optimization of the parameters that determine case retrieval and further utilization in decision making/prediction. The two case studies - mortality prediction after cardiological intervention, and resource allocation at a spa - document that the optimization process is influenced by various characteristics of the problem domain.
Deposition efficiency optimization in cold spraying of metal-ceramic powder mixtures
NASA Astrophysics Data System (ADS)
Klinkov, S. V.; Kosarev, V. F.
2017-10-01
In the present paper, results of optimization of the cold spray deposition process of a metal-ceramic powder mixture involving impacts of ceramic particles onto coating surface are reported. In the optimization study, a two-probability model was used to take into account the surface activation induced by the ceramic component of the mixture. The dependence of mixture deposition efficiency on the concentration and size of ceramic particles was analysed to identify the ranges of both parameters in which the effect due to ceramic particles on the mixture deposition efficiency was positive. The dependences of the optimum size and concentration of ceramic particles, and also the maximum gain in deposition efficiency, on the probability of adhesion of metal particles to non-activated coating surface were obtained.
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
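The Q10 parameter above enters through the standard exponential temperature-response function for ecosystem respiration; a minimal sketch follows (the reference rate and reference temperature are illustrative assumptions, not BEPS values).

```python
def respiration(T, R_ref=2.0, Q10=2.0, T_ref=10.0):
    """Q10 temperature response: for Q10 = 2, respiration doubles with
    every 10 degree C increase above the reference temperature T_ref."""
    return R_ref * Q10 ** ((T - T_ref) / 10.0)
```

In an NEP simulation, this respiration term is subtracted from gross photosynthesis (which Vcmax helps determine), so the seasonal shape of the net flux constrains both parameters jointly.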
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to replace the current brute-force parameter search with a heuristic approach. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, captures few features of the target; this observation motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of describing edge, topology, and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel methods: on the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are proposed at the end of the paper.
Assessment of parameter regionalization methods for modeling flash floods in China
NASA Astrophysics Data System (ADS)
Ragettli, Silvan; Zhou, Jian; Wang, Haijing
2017-04-01
Rainstorm flash floods are a common and serious phenomenon during the summer months in many hilly and mountainous regions of China. For this study, we develop a modeling strategy for simulating flood events in small river basins of four Chinese provinces (Shanxi, Henan, Beijing, Fujian). The presented research is part of preliminary investigations for the development of a national operational model for predicting and forecasting hydrological extremes in basins of size 10-2000 km2, most of which are ungauged or poorly gauged. The project is supported by the China Institute of Water Resources and Hydropower Research within the framework of the national initiative for flood prediction and early warning system for mountainous regions in China (research project SHZH-IWHR-73). We use the USGS Precipitation-Runoff Modeling System (PRMS) as implemented in the Java modeling framework Object Modeling System (OMS). PRMS can operate at both daily and storm timescales, switching between the two using a precipitation threshold. This functionality allows the model to perform continuous simulations over several years and to switch to the storm mode to simulate storm response in greater detail. The model was set up for fifteen watersheds for which hourly precipitation and runoff data were available. First, automatic calibration based on the Shuffled Complex Evolution method was applied to different hydrological response unit (HRU) configurations. The Nash-Sutcliffe efficiency (NSE) was used as the assessment criterion, with only runoff data from storm events considered. HRU configurations reflect the drainage-basin characteristics and depend on assumptions regarding drainage density and minimum HRU size. We then assessed the sensitivity of optimal parameters to different HRU configurations. Finally, the transferability to other watersheds of optimal model parameters that were not sensitive to HRU configurations was evaluated.
Model calibration for the 15 catchments resulted in good model performance (NSE > 0.5) in 10 catchments and medium performance (NSE > 0.2) in 3 catchments. Optimal model parameters proved to be relatively insensitive to different HRU configurations. This suggests that dominant controls on hydrologic parameter transfer can potentially be identified based on catchment attributes describing meteorological, geological, or landscape characteristics. Parameter regionalization based on a principal component analysis (PCA) nearest-neighbor search (using all available catchment attributes) resulted in a 54% success rate in transferring optimal parameter sets while still yielding acceptable model performance. Data from more catchments are required to further increase the parameter transferability success rate or to develop regionalization strategies for individual parameters.
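The PCA nearest-neighbor regionalization evaluated above can be illustrated compactly. The sketch below is a generic stand-in for the study's procedure (the actual catchment attributes and PRMS parameter sets are not reproduced): standardized attributes are projected onto the leading principal components, and an ungauged target basin inherits the parameter set of the nearest gauged donor; all names are hypothetical.

```python
import numpy as np

def regionalize(attrs_gauged, params_gauged, attrs_target, n_pc=2):
    # PCA nearest-neighbour parameter transfer (illustrative sketch):
    # standardize catchment attributes, project onto the leading
    # principal components, and copy the parameter set of the closest
    # gauged catchment in PC space.
    X = np.asarray(attrs_gauged, float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    Z = (X - mu) / sd
    # principal axes = eigenvectors of the attribute covariance matrix
    _, vecs = np.linalg.eigh(np.cov(Z.T))
    W = vecs[:, ::-1][:, :n_pc]          # leading components first
    scores = Z @ W
    target = ((np.asarray(attrs_target, float) - mu) / sd) @ W
    donor = np.argmin(np.linalg.norm(scores - target, axis=1))
    return params_gauged[donor]
```

In a real application the donor's full calibrated parameter set would be transferred and the resulting simulation checked against an acceptability criterion such as NSE.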
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, even though the entire volume of the tumor was not permeabilized with the particular needle array holder and pulse generator, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.
NASA Astrophysics Data System (ADS)
Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.
2013-06-01
Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. 
Results indicate that the effect of uncertainties associated with the geostatistical parameters on the spatial prediction might be significantly alleviated (by up to 80% of the prior uncertainty in K and by 90% of the prior uncertainty in H) by sampling evenly distributed measurements with a spatial measurement density of more than 1 observation per 60 m × 60 m grid block. In addition, exploration of the interaction of objective functions indicates that the ability of head measurements to reduce the uncertainty associated with the correlation scale is comparable to the effect of hydraulic conductivity measurements.
Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts
NASA Astrophysics Data System (ADS)
Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo
This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed, and hatch width. With 50 test samples (1 × 1 × 1 cm), we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool, one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing the parameters that result in the fastest print speed.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
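Fitting a pdf to a UH by nonlinear least squares can be sketched as follows. This is a minimal illustration, assuming the two-parameter gamma (Nash) form for the UH and using a coarse brute-force grid in place of the Mathematica and genetic-algorithm optimizers used in the study; function names and grid ranges are hypothetical.

```python
import math

def gamma_uh(t, n, k):
    # Two-parameter gamma pdf used as an instantaneous unit hydrograph
    # (Nash form): u(t) = t**(n-1) * exp(-t/k) / (k**n * Gamma(n))
    if t <= 0:
        return 0.0
    return t ** (n - 1) * math.exp(-t / k) / (k ** n * math.gamma(n))

def fit_gamma_uh(times, ordinates):
    # Brute-force nonlinear least squares over a coarse (n, k) grid --
    # a simple stand-in for the study's nonlinear optimizers.
    best, best_err = None, float("inf")
    for n in [1 + 0.1 * i for i in range(1, 60)]:
        for k in [0.5 + 0.1 * j for j in range(60)]:
            err = sum((gamma_uh(t, n, k) - u) ** 2
                      for t, u in zip(times, ordinates))
            if err < best_err:
                best, best_err = (n, k), err
    return best
```

On noise-free ordinates generated from a gamma UH, the grid search recovers the shape and scale parameters exactly (to grid resolution); real UH data would of course leave a residual.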
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
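The q-method step described above has a compact numerical form. The sketch below implements Davenport's q-method for Wahba's problem only, without the paper's additional iteration over non-attitude parameters: the optimal quaternion is the eigenvector of the 4x4 K matrix belonging to its largest eigenvalue. A scalar-last quaternion convention is assumed, and the function names are illustrative.

```python
import numpy as np

def q_method(refs, obs, weights):
    # Davenport's q-method: build the K matrix from weighted vector
    # observations (refs in the reference frame, obs in the body frame)
    # and return the eigenvector of the largest eigenvalue.
    B = sum(w * np.outer(b, r) for r, b, w in zip(refs, obs, weights))
    S = B + B.T
    z = sum(w * np.cross(b, r) for r, b, w in zip(refs, obs, weights))
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]     # quaternion, scalar-last

def attitude_matrix(q):
    # Direction-cosine matrix from a scalar-last quaternion, mapping
    # reference-frame vectors to body-frame vectors: b = A r.
    qv, q4 = q[:3], q[3]
    qx = np.array([[0.0, -qv[2], qv[1]],
                   [qv[2], 0.0, -qv[0]],
                   [-qv[1], qv[0], 0.0]])
    return (q4 ** 2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) - 2 * q4 * qx
```

Note that, as the abstract states, this step needs no a priori attitude: the eigendecomposition delivers the exact optimum for the given (possibly iterated) parameter values.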
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.
2015-12-01
The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet, achieving superior automatic detection of seismic events is closely related to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historic data. AST learns to test the raw signal against all event-settings and automatically self-tunes to an emerging event in real-time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both to boosting the performance of existing sensors and to new sensor deployments, this system provides an important new method for automatically tuning complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show these conclusions do not change in the presence of technical noise.
NASA Astrophysics Data System (ADS)
Babagowda; Kadadevara Math, R. S.; Goutham, R.; Srinivas Prasad, K. R.
2018-02-01
Fused deposition modeling is a rapidly growing additive manufacturing technology due to its ability to build functional parts having complex geometry. The mechanical properties of the built part depend on several process parameters and on the build material of the printed specimen. The aim of this study is to characterize and optimize parameters such as layer thickness and the PLA build material, which is mixed with recycled PLA material. Tensile and flexural (bending) tests are carried out to determine the mechanical response characteristics of the printed specimens. The Taguchi method is used to design the experiments, and the Taguchi S/N ratio is used to identify the parameter sets that give good results for the respective response characteristics; the effectiveness of each parameter is investigated using analysis of variance (ANOVA).
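The Taguchi S/N ratio used to rank parameter settings is a one-line computation. As a minimal sketch, for larger-the-better responses such as tensile strength it is S/N = -10 log10(mean(1/y^2)); the function name is illustrative, and smaller-the-better and nominal-the-best variants use analogous formulas.

```python
import math

def sn_larger_is_better(ys):
    # Taguchi signal-to-noise ratio for a larger-the-better response
    # (e.g. tensile or flexural strength across replicate specimens):
    # S/N = -10 * log10( mean(1 / y^2) ), in decibels.
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))
```

The parameter level that maximizes the mean S/N ratio across the orthogonal-array runs is selected; ANOVA on the same responses then apportions each parameter's contribution.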
Lai, Chi-Chih; Friedman, Michael; Lin, Hsin-Ching; Wang, Pa-Chun; Hwang, Michelle S; Hsu, Cheng-Ming; Lin, Meng-Chih; Chin, Chien-Hung
2015-08-01
To identify standard clinical parameters that may predict the optimal level of continuous positive airway pressure (CPAP) in adult patients with obstructive sleep apnea/hypopnea syndrome (OSAHS). This is a retrospective study in a tertiary academic medical center that included 129 adult patients (117 males and 12 females) with OSAHS confirmed by diagnostic polysomnography (PSG). All OSAHS patients underwent successful full-night manual titration to determine the optimal CPAP pressure level for OSAHS treatment. The PSG parameters and a complete physical examination, including body mass index, tonsil size grading, modified Mallampati grade (also known as updated Friedman's tongue position [uFTP]), uvular length, neck circumference, waist circumference, hip circumference, thyroid-mental distance, and hyoid-mental distance (HMD), were recorded. When the physical examination variables and OSAHS disease severity were correlated singly with the optimal CPAP pressure, we found that uFTP, HMD, and the apnea/hypopnea index (AHI) were reliable predictors of CPAP pressure (P = .013, P = .002, and P < .001, respectively, by multiple regression). When all important factors were considered in a stepwise multiple linear regression analysis, a significant correlation with optimal CPAP pressure was formulated by factoring in uFTP, HMD, and AHI (optimal CPAP pressure = 1.01 uFTP + 0.74 HMD + 0.059 AHI - 1.603). This study established the correlation of uFTP, HMD, and AHI with the optimal CPAP pressure. The structure of the upper airway (especially tongue base obstruction) and disease severity may predict the effective level of CPAP pressure.
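The reported regression can be wrapped in a helper for illustration. The coefficients come directly from the abstract; the function name and argument units are assumptions, and such a prediction is of course a screening aid, not a substitute for titration.

```python
def predicted_cpap(uftp, hmd, ahi):
    # Stepwise regression reported in the study:
    # optimal CPAP pressure = 1.01*uFTP + 0.74*HMD + 0.059*AHI - 1.603
    # uftp: updated Friedman tongue position grade
    # hmd:  hyoid-mental distance
    # ahi:  apnea/hypopnea index (events per hour)
    return 1.01 * uftp + 0.74 * hmd + 0.059 * ahi - 1.603
```

As the model implies, a higher tongue-position grade, a longer hyoid-mental distance, or greater disease severity each raises the predicted pressure.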
NASA Astrophysics Data System (ADS)
Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore
2017-10-01
Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This could be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, it can be impossible to obtain satisfying segmentation results for the whole scene with a single segmentation parameter. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared with a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
Law, Phillip C F; Miller, Steven M; Ngo, Trung T
2017-11-01
Binocular rivalry (BR) occurs when conflicting images concurrently presented to corresponding retinal locations of each eye stochastically alternate in perception. Anomalies of BR rate have been examined in a range of clinical psychiatric conditions. In particular, slow BR rate has been proposed as an endophenotype for bipolar disorder (BD) to improve power in large-scale genome-wide association studies. Examining the validity of BR rate as a BD endophenotype however requires large-scale datasets (n = 1000s to 10,000s), a standardized testing protocol, and optimization of stimulus parameters to maximize separation between BD and healthy groups. Such requirements are indeed relevant to all clinical psychiatric BR studies. Here we address the issue of stimulus optimization by examining the effect of stimulus parameter variation on BR rate and mixed-percept duration (MPD) in healthy individuals. We aimed to identify the stimulus parameters that induced the fastest BR rates with the least MPD. Employing a repeated-measures within-subjects design, 40 healthy adults completed four BR tasks using orthogonally drifting grating stimuli that varied in drift speed and aperture size. Pairwise comparisons were performed to determine modulation of BR rate and MPD by these stimulus parameters, and individual variation in such modulation was also assessed. From amongst the stimulus parameters examined, we found that 8 cycles/s drift speed in a 1.5° aperture induced the fastest BR rate without increasing MPD, but that BR rate with this stimulus configuration was not substantially different from BR rate with stimulus parameters we have used in previous studies (i.e., 4 cycles/s drift speed in a 1.5° aperture). In addition to contributing to stimulus optimization issues, the findings have implications for Levelt's Proposition IV of binocular rivalry dynamics and for individual differences in such dynamics.
Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori
2013-01-01
Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VADs), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters, leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared with a traditional manual design method.
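A real-coded genetic algorithm of the kind applied to the inductive-link design can be sketched generically. The snippet below is not the authors' implementation and does not model the TET coil physics: it shows tournament selection, uniform crossover, and Gaussian mutation over bounded real parameters for an arbitrary fitness function, with all names and settings hypothetical.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, gens=60,
                     mut_rate=0.2, mut_scale=0.1):
    # Generic real-coded GA maximizing `fitness` over box bounds.
    def rand_ind():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = sorted(pop, key=fitness, reverse=True)[:2]  # elitism
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1, p2 = (max(random.sample(pop, 3), key=fitness)
                      for _ in range(2))
            # uniform crossover
            child = [a if random.random() < 0.5 else b
                     for a, b in zip(p1, p2)]
            # Gaussian mutation, clamped to the bounds
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < mut_rate:
                    child[j] = min(hi, max(lo, child[j] +
                                   random.gauss(0, mut_scale * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

In the paper's setting, the genome would encode link parameters (e.g. coil geometry and drive settings) and the fitness would trade off efficiency against power variability; here those details are left abstract.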
NASA Astrophysics Data System (ADS)
Bean, Glenn E.; Witkin, David B.; McLouth, Tait D.; Zaldivar, Rafael J.
2018-02-01
Research on the selective laser melting (SLM) method of laser powder bed fusion additive manufacturing (AM) has shown that the surface and internal quality of AM parts is directly related to machine settings such as laser energy density, scanning strategies, and atmosphere. To optimize laser parameters for improved component quality, the energy density is typically controlled via laser power, scanning rate, and scanning strategy, but can also be controlled by changing the spot size via a laser focal plane shift. Present work being conducted by The Aerospace Corporation was initiated after observing inconsistent build quality of parts printed using OEM-installed settings. Initial builds of Inconel 718 witness geometries using OEM laser parameters were evaluated for surface roughness, density, and porosity while varying energy density via laser focus shift. Based on these results, hardware and laser parameter adjustments were made in order to improve build quality and consistency. Tensile testing was also conducted to investigate the effect of build plate location and laser settings on SLM 718. This work has provided insight into the limitations of OEM parameters compared with optimized parameters toward the goal of manufacturing aerospace-grade parts, and has led to the development of a methodology for laser parameter tuning that can be applied to other alloy systems. Additionally, evidence was found that for 718, which derives its strength from post-manufacturing heat treatment, tensile testing may not be sensitive to defects that would reduce component performance. Ongoing research is being conducted toward identifying appropriate testing and analysis methods for screening and quality assurance.
Metal Matrix Superconductor Composites for SMES-Driven, Ultra High Power BEP Applications: Part 2
NASA Astrophysics Data System (ADS)
Gross, Dan A.; Myrabo, Leik N.
2006-05-01
A 2.5 TJ superconducting magnetic energy storage (SMES) design presentation is continued from the preceding paper (Part 1) with electromagnetic and associated stress analysis. The application of interest is a rechargeable power-beaming infrastructure for manned microwave Lightcraft operations. It is demonstrated that while operational performance is within manageable parameter bounds, quench (loss of superconducting state) imposes enormous electrical stresses. Therefore, alternative multiple toroid modular configurations are identified, alleviating simultaneously all excessive stress conditions, operational and quench, in the structural, thermal and electromagnetic sense — at some reduction in specific energy, but presenting programmatic advantages for a lengthy technology development, demonstration and operation schedule. To this end several natural units, based on material properties and operating parameters are developed, in order to identify functional relationships and optimization paths more effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, Jim; Flicker, Dawn; Ide, Kayo
2006-05-20
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated accordingly, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are subject only to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model, and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
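The augmented-state idea is easy to demonstrate on a toy system. The sketch below is not the shock-wave code: it runs an EKF on a scalar linear system x_{k+1} = a*x_k with the unknown parameter a appended to the state, so that a single measurement stream estimates state and parameter together; all noise values and names are illustrative.

```python
import numpy as np

def ekf_augmented(ys, x0, a0, q_x=1e-4, q_a=1e-4, r=0.01):
    # EKF with the unknown model parameter a folded into an augmented
    # state s = [x, a]:
    #   x_{k+1} = a * x_k + process noise   (deterministic + stochastic)
    #   a_{k+1} = a_k     + process noise   (stochastic only)
    #   y_k     = x_k     + measurement noise
    s = np.array([x0, a0], float)
    P = np.eye(2)
    Q = np.diag([q_x, q_a])
    H = np.array([[1.0, 0.0]])
    for y in ys:
        x, a = s
        F = np.array([[a, x], [0.0, 1.0]])   # Jacobian of the dynamics
        s = np.array([a * x, a])             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                  # innovation covariance
        K = P @ H.T / S                      # Kalman gain (2x1)
        s = s + (K * (y - s[0])).ravel()     # update with measurement y
        P = (np.eye(2) - K @ H) @ P
    return s                                 # [x_est, a_est]
```

As in the paper, the parameter carries only a stochastic forcing term in the prediction step, and the unified update sharpens both the state and the parameter estimate.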
TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, H; Gordon, J; Chetty, I
2014-06-15
Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells during radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr, and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical models, particle transport mechanics, and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of the constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
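A minimal sketch of the robust design idea, assuming a toy one-dimensional input: the objective averages a cost over sampled parameter values (the Bayesian robust criterion) and a bare-bones PSO handles the box constraint on the input. The particle count, coefficients, and the toy objective are illustrative assumptions, not the paper's formulation.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimizer for a 1-D box-constrained problem.

    A toy stand-in for the constrained robust input-design optimization;
    the objective f averages cost over sampled model parameters.
    """
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                          # personal best positions
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = 0.6 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) + 1.5 * r2 * (gbest - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))   # clamp to input constraint
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i]
                if val < gval:
                    gval, gbest = val, x[i]
    return gbest, gval

# Robust objective: average cost over sampled uncertain model parameters
# (a Bayesian average, mimicking the robust design criterion).
thetas = [0.8, 1.0, 1.2]                  # hypothetical parameter samples
cost = lambda a: -sum((a * th) ** 2 for th in thetas) / len(thetas)
```

Here maximizing the averaged information drives the input amplitude to its constraint boundary, which is the typical behavior for amplitude-limited input design.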
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples covering simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identified results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
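The harmony search step in the linked simulation-optimization loop can be sketched as follows; here a cheap quadratic misfit stands in for the MODFLOW/MT3DMS forward runs, and all HS settings (memory size, hmcr, par, bandwidth) and the toy release rates are illustrative assumptions.

```python
import random

def harmony_search(f, lo, hi, dim, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=300, seed=3):
    """Bare-bones harmony search (HS) for box-constrained minimization.

    Sketch of the heuristic used in the linked simulation-optimization
    model; in the paper f would run the groundwater simulators and return
    the misfit between simulated and observed concentrations.
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:           # pick from harmony memory
                val = memory[rng.randrange(hms)][d]
                if rng.random() < par:        # pitch adjustment
                    val += rng.uniform(-bw, bw)
            else:                             # random consideration
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                 # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy inverse problem: recover two source release rates from "observations".
true_rates = [0.3, 0.7]
misfit = lambda q: sum((qi - ti) ** 2 for qi, ti in zip(q, true_rates))
```

Because each decision variable is drawn from memory independently, good coordinate values found in different harmonies recombine, which is what makes HS effective on source-identification problems with several decision variables.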
Feuereisen, Michelle M; Gamero Barraza, Mariana; Zimmermann, Benno F; Schieber, Andreas; Schulze-Kaysers, Nadine
2017-01-01
Response surface methodology was employed to investigate the effects of pressurized liquid extraction (PLE) parameters on the recovery of phenolic compounds (anthocyanins, biflavonoids) from Brazilian pepper (Schinus terebinthifolius Raddi) fruits. The effects of temperature, static time, and ethanol as well as acid concentration on the polyphenol yield were described well by quadratic models (p<0.0001). A significant influence of the ethanol concentration (p<0.0001) and several interactions (p<0.05) were identified. Identification of the biflavonoid I3',II8-binaringenin in drupes of S. terebinthifolius was achieved by UHPLC-MS(2). Interestingly, at high extraction temperatures (>75°C), an artifact occurred and was tentatively identified as a diastereomer of I3',II8-binaringenin. Multivariate optimization led to high yields of phenolic compounds from the exocarp/drupes at 100/75°C, 10/10min, 54.5/54.2% ethanol, and 5/0.03% acetic acid. This study demonstrates that PLE is well suited for the extraction of phenolic compounds from S. terebinthifolius and can efficiently be optimized by response surface methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri
2013-01-01
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10–1000 s), parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir
NASA Astrophysics Data System (ADS)
Wei, Sun
2018-01-01
It is difficult to satisfy the demand for profile control of complex well sections and multi-layer reservoirs by adopting conventional profile control technology; therefore, research was conducted on adjusting the injection-production profile through layered perforating parameter optimization. That is, in the case of co-production from multiple layers, the water absorption of each layer is adjusted by adjusting the perforating parameters, so as to balance the injection-production profile of the whole well section and ultimately enhance the oil displacement efficiency of water flooding. By applying oil-water two-phase percolation theory and the relationship between perforating damage and capacity, a mathematical model for adjusting the injection-production profile through layered perforating parameter optimization was established, and perforating parameter optimization software was programmed. Different types of optimization design work were carried out for different geological conditions and construction purposes using the perforating optimization design software. Furthermore, an application test was done for a low-permeability reservoir, and the water injection profile tended to balance significantly after perforation with the optimized parameters, thereby achieving a good application effect on site.
Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms
2017-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms (reporting period to 09-22-2017). ... simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and
Sex and Aggregation-Sex Pheromones of Cerambycid Beetles: Basic Science and Practical Applications.
Hanks, Lawrence M; Millar, Jocelyn G
2016-07-01
Research since 2004 has shown that the use of volatile attractants and pheromones is widespread in the large beetle family Cerambycidae, with pheromones now identified from more than 100 species, and likely pheromones for many more. The pheromones identified to date from species in the subfamilies Cerambycinae, Spondylidinae, and Lamiinae are all male-produced aggregation-sex pheromones that attract both sexes, whereas all known examples for species in the subfamilies Prioninae and Lepturinae are female-produced sex pheromones that attract only males. Here, we summarize the chemistry of the known pheromones, and the optimal methods for their collection, analysis, and synthesis. Attraction of cerambycids to host plant volatiles, interactions between their pheromones and host plant volatiles, and the implications of pheromone chemistry for invasion biology are discussed. We also describe optimized traps, lures, and operational parameters for practical applications of the pheromones in detection, sampling, and management of cerambycids.
Constant Communities in Complex Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh
2013-05-01
Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, and merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignments to communities are, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
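The notion of constant communities can be illustrated with any order-sensitive community detection routine. The sketch below uses a deterministic label-propagation stand-in (not the modularity optimizers studied in the paper) and intersects the assignments obtained under different vertex orderings; all names and the tie-breaking rule are assumptions for the illustration.

```python
def label_propagation(adj, order):
    """Deterministic asynchronous label propagation over a given vertex order.

    A toy stand-in for an order-sensitive community detection algorithm.
    Ties are broken toward the largest label, so the outcome depends only
    on the sweep order.
    """
    labels = {v: v for v in adj}
    for _ in range(50):                   # sweep cap guarantees termination
        changed = False
        for v in order:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if not counts:                # isolated vertex keeps its label
                continue
            best = max(counts.values())
            choice = max(l for l, c in counts.items() if c == best)
            if choice != labels[v]:
                labels[v], changed = choice, True
        if not changed:
            break
    return labels

def constant_pairs(adj, orders):
    """Vertex pairs assigned to the same community under every ordering."""
    verts = sorted(adj)
    pairs = {(a, b) for i, a in enumerate(verts) for b in verts[i + 1:]}
    for order in orders:
        labels = label_propagation(adj, order)
        pairs = {(a, b) for a, b in pairs if labels[a] == labels[b]}
    return pairs
```

On two 4-cliques joined by a single bridge, a forward sweep recovers the two cliques while a reversed sweep merges everything, so only the within-clique pairs survive as constant: exactly the order-invariant cores the abstract describes.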
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and for easy coupling to, any scientific software application. GLO runs the optimization module and the scientific application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
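The put/run/get loop described above can be sketched in a few lines. Here a local Python function and JSON files stand in for the external scientific code and its input/output formats, and a golden-section search stands in for the optimization modules; the file names, the "yield_strength" parameter, and the toy response are all hypothetical.

```python
import json
import pathlib
import tempfile

def run_application(infile, outfile):
    """Hypothetical stand-in for the external scientific code; in GLO this
    would be any simulation launched as a separate process."""
    params = json.loads(pathlib.Path(infile).read_text())
    # Toy "simulation": response depends quadratically on the parameter.
    result = (params["yield_strength"] - 250.0) ** 2 + 10.0
    pathlib.Path(outfile).write_text(json.dumps({"final_length": result}))

def optimize(target, lo, hi, iters=40):
    """GLO-style loop: write input (GLO-PUT), run the application, read
    output (GLO-GET), score against the desired result, update the
    parameter. Golden-section search plays the optimization module."""
    phi = (5 ** 0.5 - 1) / 2
    work = pathlib.Path(tempfile.mkdtemp())

    def objective(x):
        infile, outfile = work / "in.json", work / "out.json"
        infile.write_text(json.dumps({"yield_strength": x}))  # GLO-PUT step
        run_application(infile, outfile)                      # run the app
        out = json.loads(outfile.read_text())                 # GLO-GET step
        return abs(out["final_length"] - target)              # misfit

    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if objective(c) < objective(d):
            b = d
        else:
            a = c
    return (a + b) / 2
```

The file-based coupling is the key design choice: the optimizer never needs to know anything about the application beyond where to write parameters and where to read results.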
Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao
2015-08-14
This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built using the parametric CAD software SOLIDWORKS®; then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted using the multi-body dynamics software ADAMS™, and the main dynamic parameters such as displacement, velocity, acceleration and reaction curves are obtained through simulation analysis. These dynamic parameters are then input into a MATLAB® SIMULINK® controller model to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the method, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that co-simulation using virtual prototyping (VP) is effective for obtaining optimized ISP control parameters, eventually leading to high ISP control performance.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
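A scenario-based reading of the chance constraint can be sketched for a single decision variable: pick the smallest vaccination coverage for which the effective reproduction number falls below one in a required fraction of sampled parameter scenarios. This is a hypothetical one-dimensional illustration, not the paper's full stochastic program, and the scenario values are made up.

```python
def min_coverage(r0_scenarios, alpha=0.95, grid=1000):
    """Smallest vaccination coverage v such that the effective reproduction
    number R0 * (1 - v) drops below 1 in at least a fraction alpha of the
    sampled parameter scenarios (a scenario form of the chance constraint).
    """
    n = len(r0_scenarios)
    for k in range(grid + 1):
        v = k / grid
        ok = sum(1 for r0 in r0_scenarios if r0 * (1.0 - v) < 1.0)
        if ok / n >= alpha:            # chance constraint satisfied
            return v
    return 1.0
```

Raising alpha tightens the chance constraint and monotonically raises the required coverage, which is the cost/risk trade-off the stochastic program explores.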
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) that controls the weight of the regularization relative to the data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three different regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under optimally selected parameters, and the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate that optimal parameters are specific to both task type and dose level, providing guidance for selecting parameters in advanced IR algorithms.
This work is supported in part by NIH (1R01CA154747-01)
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that may perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subject to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on the fabrication of composites with high dimensional accuracy and low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by varying three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the most effect on material removal rate and surface roughness, followed by feed rate.
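The grey relational analysis step combines the two responses into a single grade per experimental run. A minimal sketch with the conventional distinguishing coefficient of 0.5 is shown below; the response values in the demo are made up, not the study's measurements.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational analysis for multi-response optimization.

    responses: one row per experiment, one column per response.
    larger_better: per-column flag (True for material removal rate,
    False for surface roughness). Returns one grade per experiment;
    the highest grade marks the best compromise parameter setting.
    """
    cols = list(zip(*responses))
    norm = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        # Normalize so that 1.0 is always the ideal value.
        norm.append([(x - lo) / span if lb else (hi - x) / span for x in col])
    grades = []
    for row in zip(*norm):
        deltas = [1.0 - x for x in row]               # deviation from ideal
        coeffs = [zeta / (d + zeta) for d in deltas]  # grey relational coeff.
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

After normalization the usual coefficient (Δmin + ζΔmax)/(Δi + ζΔmax) reduces to ζ/(Δi + ζ), since Δmin = 0 and Δmax = 1; ranking the grades replaces the two separate response rankings with one.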
Optimization of parameters of special asynchronous electric drives
NASA Astrophysics Data System (ADS)
Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.
2018-03-01
The article considers the solution of the problem of optimizing the parameters of special asynchronous electric drives. Solving this problem will make it possible to design and build special asynchronous electric drives for various industries, with optimal mass-dimensional and power parameters. This will allow the specified characteristics of technological process control to be realized with an optimal level of electric energy expenditure, process completion time, or other specified parameters. The obtained solution not only solves a particular optimization problem, but also constructs dependences between the optimized parameters of special asynchronous electric drives and, for example, the change in power, the current in the stator or rotor winding, the induction in the gap or in the steel of the magnetic conductors, and other parameters. From the constructed dependences, the necessary optimal values of the parameters of special asynchronous electric drives and their components can be chosen without repeated calculations.
Using geometry to improve model fitting and experiment design for glacial isostasy
NASA Astrophysics Data System (ADS)
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
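Fitting on the model manifold can be sketched with a plain Levenberg-Marquardt iteration; the geodesic acceleration refinement mentioned above is omitted, and the two-parameter exponential model and data are illustrative assumptions rather than a glacial isostasy model.

```python
import math

def lm_fit(model, p0, ts, ys, iters=100, lam=1e-3):
    """Plain Levenberg-Marquardt for a two-parameter model.

    Forward-difference Jacobian; the damped normal equations
    (J^T J + lam * J^T J) dp = -J^T r are solved in closed form (2x2).
    """
    p = list(p0)

    def residuals(p):
        return [model(t, p) - y for t, y in zip(ts, ys)]

    def sq(r):
        return sum(ri * ri for ri in r)

    r = residuals(p)
    for _ in range(iters):
        # Forward-difference Jacobian, one column per parameter.
        J, h = [], 1e-6
        for j in range(2):
            q = p[:]
            q[j] += h
            rq = residuals(q)
            J.append([(a - b) / h for a, b in zip(rq, r)])
        a00 = sum(x * x for x in J[0])
        a11 = sum(x * x for x in J[1])
        a01 = sum(x * y for x, y in zip(J[0], J[1]))
        g0 = -sum(x * y for x, y in zip(J[0], r))
        g1 = -sum(x * y for x, y in zip(J[1], r))
        m00, m11 = a00 * (1 + lam), a11 * (1 + lam)
        det = m00 * m11 - a01 * a01
        dp0 = (g0 * m11 - a01 * g1) / det
        dp1 = (m00 * g1 - a01 * g0) / det
        trial = [p[0] + dp0, p[1] + dp1]
        rt = residuals(trial)
        if sq(rt) < sq(r):            # accept step, relax damping
            p, r, lam = trial, rt, lam * 0.5
        else:                         # reject step, increase damping
            lam *= 2.0
    return p

# Hypothetical demo: recover (a, b) of y = a * exp(b * t) from exact data.
model = lambda t, p: p[0] * math.exp(p[1] * t)
ts = list(range(6))
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
```

The damping parameter interpolates between gradient descent (robust far from the optimum, where the manifold curves) and Gauss-Newton (fast near it); geodesic acceleration adds a second-order correction along the same step direction to cope with strongly curved manifolds.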
NASA Astrophysics Data System (ADS)
Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine
2015-12-01
A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data, such as satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations, with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations, suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the northern part of the domain, where the ecosystem model is most reliable, most clusters can be associated with Longhurst provinces, and new provinces emerge in the Arctic Ocean. However, the clusters no longer coincide with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to a significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free-run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
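The parameter component of an ensemble-based analysis step can be sketched as follows, assuming a scalar observed state and one unobserved parameter. The ensemble values, the linear state-parameter relation, and all numbers are illustrative assumptions, not the HYCOM-NORWECOM setup.

```python
import random

def enkf_parameter_update(params, states, obs, obs_err, seed=0):
    """One EnKF analysis step for combined state-parameter estimation.

    Each ensemble member carries a state (observed) and a parameter
    (unobserved); the parameter is corrected through its sample covariance
    with the observed state. Perturbed-observation form, scalar case.
    """
    rng = random.Random(seed)
    n = len(params)
    mean_s = sum(states) / n
    mean_p = sum(params) / n
    cov_ps = sum((p - mean_p) * (s - mean_s)
                 for p, s in zip(params, states)) / (n - 1)
    var_s = sum((s - mean_s) ** 2 for s in states) / (n - 1)
    gain = cov_ps / (var_s + obs_err ** 2)       # Kalman gain for the parameter
    new_params = []
    for p, s in zip(params, states):
        perturbed = obs + rng.gauss(0.0, obs_err)  # perturbed observation
        new_params.append(p + gain * (perturbed - s))
    return new_params
```

Because the parameter is updated only through its sampled covariance with observed quantities, a collapsing parameter ensemble loses that covariance and stops responding to data, which is one way the filter divergence reported for 2009 can arise.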
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
NASA Astrophysics Data System (ADS)
Belwanshi, Vinod; Topkar, Anita
2016-05-01
A finite element analysis study has been carried out to optimize the design parameters of bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to the optimization of the membrane geometrical parameters, the dimensions and location of the high-stress-concentration region for implantation of piezoresistors have been obtained for pressure sensing using the piezoresistive technique.
Tran, Vi Do; Dario, Paolo; Mazzoleni, Stefano
2018-03-01
This review classifies the kinematic measures used to evaluate post-stroke motor impairment following upper limb robot-assisted rehabilitation and investigates their correlations with clinical outcome measures. An online literature search was carried out in PubMed, MEDLINE, Scopus and IEEE-Xplore databases. Kinematic parameters mentioned in the studies included were categorized into the International Classification of Functioning, Disability and Health (ICF) domains. The correlations between these parameters and the clinical scales were summarized. Forty-nine kinematic parameters were identified from 67 articles involving 1750 patients. The most frequently used parameters were: movement speed, movement accuracy, peak speed, number of speed peaks, and movement distance and duration. According to the ICF domains, 44 kinematic parameters were categorized into Body Functions and Structure, 5 into Activities and no parameters were categorized into Participation and Personal and Environmental Factors. Thirteen articles investigated the correlations between kinematic parameters and clinical outcome measures. Some kinematic measures showed a significant correlation coefficient with clinical scores, but most were weak or moderate. The proposed classification of kinematic measures into ICF domains and their correlations with clinical scales could contribute to identifying the most relevant ones for an integrated assessment of upper limb robot-assisted rehabilitation treatments following stroke. Increasing the assessment frequency by means of kinematic parameters could optimize clinical assessment procedures and enhance the effectiveness of rehabilitation treatments. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
IPO: a tool for automated optimization of XCMS parameters.
Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph
2015-04-16
Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
The impact of different dose response parameters on biologically optimized IMRT in breast cancer
NASA Astrophysics Data System (ADS)
Costa Ferreira, Brigida; Mavroidis, Panayiotis; Adamus-Górka, Magdalena; Svensson, Roger; Lind, Bengt K.
2008-05-01
The full potential of biologically optimized radiation therapy can only be realized with the prediction of individual patient radiosensitivity prior to treatment. Unfortunately, the available biological parameters, derived from clinical trials, reflect an average radiosensitivity of the examined populations. In the present study, a breast cancer patient of stage I-II with positive lymph nodes was chosen in order to analyse the effect of the variation of individual radiosensitivity on the optimal dose distribution. Thus, deviations from the average biological parameters, describing tumour, heart and lung response, were introduced covering the range of patient radiosensitivity reported in the literature. Two treatment configurations of three and seven biologically optimized intensity-modulated beams were employed. The different dose distributions were analysed using biological and physical parameters such as the complication-free tumour control probability (P+), the biologically effective uniform dose ($\bar{\bar{D}}$), dose volume histograms, mean doses, standard deviations, and maximum and minimum doses. In the three-beam plan, the difference in P+ between the optimal dose distribution (when the individual patient radiosensitivity is known) and the reference dose distribution, which is optimal for the average patient biology, ranges up to 13.9% when varying the radiosensitivity of the target volume, up to 0.9% when varying the radiosensitivity of the heart and up to 1.3% when varying the radiosensitivity of the lung. Similarly, in the seven-beam plan, the differences in P+ are up to 13.1% for the target, up to 1.6% for the heart and up to 0.9% for the left lung. When the radiosensitivity of the most important tissues in breast cancer radiation therapy was simultaneously changed, the maximum gain in outcome was as high as 7.7%.
The impact of the dose response uncertainties on the treatment outcome was clinically insignificant for the majority of the simulated patients. However, the jump from generalized to individualized radiation therapy may significantly increase the therapeutic window for patients with extreme radiosensitivity or radioresistance, provided that these are identified. Even for radiosensitive patients, a simple treatment technique is sufficient to maximize the outcome, since no significant benefits were obtained with a more complex technique using seven intensity-modulated beam portals.
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
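The variance-minimization step in the design above can be sketched in miniature. The study itself combines geostatistics, a Kalman filter and heuristic optimization; the sketch below assumes independent estimation errors and a scalar Kalman update, and the well names, variances and noise level are hypothetical:

```python
def greedy_network(prior_var, meas_noise, n_select):
    """Greedily pick monitoring wells that most reduce total estimate
    error variance. Simplified model: errors at wells are independent,
    and sampling well i updates its variance by the scalar Kalman
    formula post = prior * r / (prior + r), with r the noise variance."""
    var = dict(prior_var)                  # current error variance per well
    chosen = []
    for _ in range(n_select):
        # reduction from sampling a well is var^2 / (var + r); pick the largest
        best = max(var, key=lambda w: var[w] ** 2 / (var[w] + meas_noise))
        var[best] = var[best] * meas_noise / (var[best] + meas_noise)
        chosen.append(best)
    return chosen, sum(var.values())

# hypothetical wells: id -> prior error variance of the quality indicator
prior = {"W1": 4.0, "W2": 1.0, "W3": 9.0, "W4": 2.5}
picked, total_var = greedy_network(prior, meas_noise=1.0, n_select=2)
```

The greedy rule naturally avoids redundancy: once a well has been sampled, its residual variance is small, so the next pick moves to a poorly constrained location.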
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and, second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
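The maximum a posteriori idea can be illustrated on a deliberately tiny lumped model. The paper tunes full 0D networks with simplex (Nelder-Mead-style) optimization; the sketch below assumes a one-resistor toy model P = Q·R with Gaussian noise and a Gaussian prior, and uses grid search in place of the simplex step. All numbers are illustrative:

```python
def map_resistance(p_obs, q, sigma, r_prior, tau, grid):
    """MAP estimate of a lumped resistance R in the toy model P = Q*R,
    with Gaussian measurement noise (std sigma) and Gaussian prior
    N(r_prior, tau). Minimizing the negative log-posterior balances
    the data fit against the prior."""
    def neg_log_post(r):
        return ((p_obs - q * r) ** 2) / (2 * sigma ** 2) + \
               ((r - r_prior) ** 2) / (2 * tau ** 2)
    return min(grid, key=neg_log_post)

# illustrative numbers: flow 5 L/min, observed mean pressure 90 mmHg
grid = [i * 0.01 for i in range(1000, 3000)]     # R in [10, 30) mmHg*min/L
r_map = map_resistance(p_obs=90.0, q=5.0, sigma=5.0,
                       r_prior=20.0, tau=2.0, grid=grid)
```

The data alone would give R = 90/5 = 18; the prior pulls the estimate toward 20, so the MAP value lands between the two, which is the same shrinkage effect the multi-level estimation in the paper exploits.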
Mubarak, M; Shaija, A; Suchithra, T V
2016-07-01
The higher areal productivity and lipid content of microalgae and aquatic weeds make them the best alternative feedstocks for biodiesel production. Hence, an efficient and economic method of extracting lipid or oil from the aquatic weed Salvinia molesta is an important step towards biodiesel production. Since Salvinia molesta is an unexplored feedstock, its total lipid content was first measured as 16 % using Bligh and Dyer's method, which was quite sufficient for further investigation. For extracting a larger amount of lipid from Salvinia molesta, methanol:chloroform in the ratio 2:1 v/v was identified as the most suitable solvent system using Soxhlet apparatus. Based on the literature and preliminary experimentation, parameters such as solvent to biomass ratio, temperature, and time were identified as significant for lipid extraction. These parameters were then optimized using response surface methodology with central composite design, where experiments were performed using twenty combinations of these extraction parameters with Minitab-17 software. A lipid yield of 92.4 % from Salvinia molesta was obtained with Soxhlet apparatus using methanol and chloroform (2:1 v/v) as solvent system, at the optimized conditions of temperature (85 °C), solvent to biomass ratio (20:1), and time (137 min), whereas the regression model predicted a lipid yield of 93.5 %. Fatty acid methyl ester (FAME) analysis of S. molesta lipid using gas chromatography-mass spectrometry (GC-MS) with flame ionization detector showed that fatty acids such as C16:0, C16:1, C18:1, and C18:2 contributed more than 9 % weight of total fatty acids. FAME consisted of 56.32, 28.08, and 15.59 % weight of monounsaturated, saturated, and polyunsaturated fatty acids, respectively. Higher cetane number and superior oxidation stability of S. molesta FAME could be attributed to its higher monounsaturated content and lower polyunsaturated content as compared to biodiesels produced from C. vulgaris, Sunflower, and Jatropha.
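The core of the response-surface step can be sketched in one variable: fit a second-order polynomial to yield-versus-temperature data and read off the stationary point. The study fits a multi-factor model with Minitab; the one-factor sketch below uses made-up yield data (peaking near 85 °C, echoing the reported optimum) and solves the normal equations directly:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the 3x3 normal
    equations, solved with Gaussian elimination (no external libraries)."""
    S = [sum(x ** k for x in xs) for k in range(5)]           # S[k] = sum x^k
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                                        # forward elimination
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))      # partial pivot
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    b = [0.0] * 3                                             # back substitution
    for i in (2, 1, 0):
        b[i] = (A[i][3] - sum(A[i][c] * b[c] for c in range(i + 1, 3))) / A[i][i]
    return b

# made-up yield (%) versus extraction temperature (°C)
temps = [65.0, 75.0, 85.0, 95.0, 105.0]
yields = [52.0, 82.0, 92.0, 82.0, 52.0]
mean_t = sum(temps) / len(temps)
u = [t - mean_t for t in temps]          # centering improves conditioning
b0, b1, b2 = fit_quadratic(u, yields)
t_opt = mean_t - b1 / (2 * b2)           # stationary point of the fitted surface
```

A negative quadratic coefficient confirms the stationary point is a maximum; in a real central composite design the same algebra runs over all factors and their interactions.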
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas Paul, V.; Saroja, S.; Albert, S.K.
This paper presents a detailed electron microscopy study on the microstructure of various regions of weldments fabricated by three welding methods, namely tungsten inert gas welding, electron beam welding and laser beam welding, in an indigenously developed 9Cr reduced activation ferritic/martensitic steel. Electron backscatter diffraction studies showed a random micro-texture in all three welds. Microstructural changes during thermal exposures were studied and corroborated with hardness measurements, and optimized conditions for the post-weld heat treatment have been identified for this steel. The Hollomon–Jaffe parameter has been used to estimate the extent of tempering. The activation energy for the tempering process has been evaluated and found to correspond to interstitial diffusion of carbon in the ferrite matrix. The type and microchemistry of secondary phases in different regions of the weldment have been identified by analytical transmission electron microscopy. - Highlights: • Comparison of microstructural parameters in TIG, electron beam and laser welds of RAFM steel • EBSD studies to illustrate the absence of preferred orientation and identification of prior austenite grain size using phase identification map • Optimization of PWHT conditions for indigenous RAFM steel • Study of kinetics of tempering and estimation of apparent activation energy of the process.
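The Hollomon–Jaffe parameter mentioned above combines tempering temperature and time into a single equivalence measure. A minimal sketch of the usual form follows; the constant C ≈ 20 is a common choice for steels and the temperatures and times here are illustrative, not taken from the study:

```python
import math

def hollomon_jaffe(temp_c, hours, c=20.0):
    """Hollomon–Jaffe tempering parameter HP = T_K * (C + log10(t)),
    with temperature in kelvin and time t in hours. Equal HP values
    indicate comparable tempering effects. C ~ 20 is a conventional
    choice for steels and is treated here as an assumption."""
    return (temp_c + 273.15) * (c + math.log10(hours))

# two hypothetical tempering schedules with nearly equivalent effect:
hp_short = hollomon_jaffe(760.0, 1.0)    # 760 °C for 1 h
hp_long = hollomon_jaffe(730.0, 4.0)     # lower temperature, longer time
```

The near-equality of the two values shows the time-temperature trade-off the parameter encodes: a modest temperature drop can be compensated by a longer hold.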
Sensitivity study and parameter optimization of OCD tool for 14nm finFET process
NASA Astrophysics Data System (ADS)
Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping
2016-03-01
Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes in the technology nodes of 90 nm and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore, sensitivity analysis and parameter optimization are critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance the metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra under a series of hardware configurations of incidence angles and azimuth angles was investigated, so that the optimum hardware measurement configuration and spectrum parameters could be identified. FinFET structures in the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.
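The configuration search described above amounts to ranking candidate hardware settings by the sensitivity of the simulated spectrum to the parameter of interest. The sketch below replaces the rigorous electromagnetic solver with a smooth toy spectrum function (an acknowledged stand-in) and uses a finite-difference sensitivity norm; all angles, wavelengths and the spectrum model are illustrative:

```python
import math

def toy_spectrum(cd_nm, aoi_deg, wavelengths):
    """Toy stand-in for a rigorous spectrum solver (e.g. RCWA):
    reflectance as a smooth function of critical dimension and angle
    of incidence. Purely illustrative, not a physical model."""
    a = math.radians(aoi_deg)
    return [0.5 + 0.4 * math.sin(0.05 * cd_nm * math.cos(a) + 2 * math.pi * 500.0 / w)
            for w in wavelengths]

def sensitivity(aoi_deg, cd_nm=20.0, d_cd=0.1):
    """Finite-difference sensitivity norm ||dS/dCD|| over the band:
    larger values mean the spectrum responds more to a CD change."""
    wl = [400.0 + 10.0 * k for k in range(31)]
    s_hi = toy_spectrum(cd_nm + d_cd, aoi_deg, wl)
    s_lo = toy_spectrum(cd_nm - d_cd, aoi_deg, wl)
    return math.sqrt(sum(((b - a) / (2 * d_cd)) ** 2 for a, b in zip(s_lo, s_hi)))

# scan candidate hardware configurations and keep the most sensitive one
angles = [0.0, 15.0, 30.0, 45.0, 60.0, 75.0]
best_aoi = max(angles, key=sensitivity)
```

In a real OCD setup the same loop would run over azimuth angles and spectrum types as well, and the sensitivity would be weighted by the measurement noise of each channel.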
Li, Ke; Chen, Peng
2011-01-01
Structural faults, such as unbalance, misalignment and looseness, etc., often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs) in order to detect faults and distinguish fault types at an early stage. New symptom parameters called "relative ratio symptom parameters" are defined for reflecting the features of vibration signals measured in each state. Synthetic detection index (SDI) using statistical theory has also been defined to evaluate the applicability of the RRSPs. The SDI can be used to indicate the fitness of a RRSP for ACO. Lastly, this paper also compares the proposed method with the conventional neural networks (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults often occurring in the centrifugal fan, such as unbalance, misalignment and looseness states are effectively identified by the proposed method, while these faults are difficult to detect using conventional neural networks.
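The role of the synthetic detection index above is to score how well a symptom parameter separates the normal and fault states. A minimal sketch of the underlying distance form follows; the paper additionally converts this index into a detection probability, and the symptom-parameter names and statistics below are hypothetical:

```python
import math

def detection_index(mu_n, sd_n, mu_f, sd_f):
    """Separation between the normal-state and fault-state distributions
    of one symptom parameter: the larger the index, the easier the two
    states are to distinguish."""
    return abs(mu_f - mu_n) / math.sqrt(sd_n ** 2 + sd_f ** 2)

# hypothetical symptom parameters: (normal mean, sd, fault mean, sd)
candidates = {
    "rrsp_kurtosis": (3.0, 0.5, 4.0, 0.6),
    "rrsp_rms_ratio": (1.0, 0.1, 1.8, 0.2),
    "rrsp_crest": (2.5, 0.8, 2.9, 0.9),
}
# rank candidate symptom parameters by separability, best first
ranked = sorted(candidates, key=lambda k: detection_index(*candidates[k]),
                reverse=True)
```

The top-ranked parameters are the ones worth feeding to the ant colony optimizer; poorly separating parameters only add noise to the diagnosis.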
Picking ChIP-seq peak detectors for analyzing chromatin modification experiments
Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D.; Kluger, Yuval
2012-01-01
Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads, to identify enriched regions of any length. To objectively assess its performance relative to 14 other ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that typically optimal parameters in one dataset do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development. PMID:22307239
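The basic operation shared by the peak finders compared above is flagging windows whose read count is improbable under a background model, then joining neighboring flagged windows into regions. The sketch below is a deliberately simple caller with a Poisson background (not Qeseq's actual algorithm); counts and thresholds are illustrative:

```python
import math

def poisson_sf(k, lam):
    """P[X >= k] for X ~ Poisson(lam), by summing the pmf up to k-1."""
    p, total = math.exp(-lam), 0.0
    for i in range(k):
        total += p
        p *= lam / (i + 1)
    return 1.0 - total

def call_peaks(counts, lam, alpha=1e-3):
    """Toy enrichment caller: flag windows whose read count is
    improbably high under a Poisson background with mean lam, then
    join adjacent flagged windows into (start, end) regions."""
    flagged = [i for i, c in enumerate(counts) if poisson_sf(c, lam) < alpha]
    regions, start, prev = [], None, None
    for i in flagged:
        if start is None:
            start = prev = i
        elif i == prev + 1:
            prev = i
        else:
            regions.append((start, prev))
            start = prev = i
    if start is not None:
        regions.append((start, prev))
    return regions

# per-window read counts with an enriched block in the middle
counts = [2, 3, 1, 2, 15, 18, 14, 2, 1, 12, 2]
peaks = call_peaks(counts, lam=2.0)
```

Real callers differ mainly in how the background mean is estimated locally and how broad, diffuse marks are merged, which is exactly where the benchmarked parameter configurations diverge.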
De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Facchini, Francesco; Mummolo, Giovanni; Ludovico, Antonio Domenico
2016-11-10
A simulation model was developed for the monitoring, controlling and optimization of the Friction Stir Welding (FSW) process. This approach, using the FSW technique, allows identifying the correlation between the process parameters (input variables) and the mechanical properties (output responses) of the welded AA5754 H111 aluminum plates. The optimization of technological parameters is a basic requirement for increasing the seam quality, since it promotes a stable and defect-free process. The tool rotation speed, the travel speed, the position of the samples extracted from the weld bead, and the thermal data detected with thermographic techniques for on-line control of the joints were varied to build the experimental plans. The quality of joints was evaluated through destructive and non-destructive tests (visual tests, macrographic analysis, tensile tests, Vickers indentation hardness tests and thermographic controls). The simulation model was based on the adoption of Artificial Neural Networks (ANNs) with a back-propagation learning algorithm and different types of architecture, which were able to predict with good reliability the FSW process parameters for the welding of the AA5754 H111 aluminum plates in butt-joint configuration.
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied to redesigning MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
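The Taguchi machinery used above (orthogonal arrays, signal-to-noise ratios, main effects) can be sketched on a toy problem. The sketch assumes three two-level factors on an L4 array with a smaller-the-better response; the response data are fabricated so that factor A dominates, to show how the dominant parameter falls out of the analysis:

```python
import math

# L4 orthogonal array: each row assigns levels (0/1) to factors A, B, C
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sn_smaller_better(ys):
    """Taguchi signal-to-noise ratio for a smaller-the-better response."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# hypothetical measured responses (two replicates per run); factor A
# was constructed to dominate the outcome in this toy data
responses = [[2.0, 2.2], [2.1, 2.3], [8.0, 8.4], [8.2, 8.1]]
sn = [sn_smaller_better(r) for r in responses]

def main_effect(factor):
    """Range of mean S/N between the two levels of one factor."""
    lvl = [[], []]
    for row, s in zip(L4, sn):
        lvl[row[factor]].append(s)
    return abs(sum(lvl[0]) / len(lvl[0]) - sum(lvl[1]) / len(lvl[1]))

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
dominant = max(effects, key=effects.get)
```

Ranking the factors by effect size is what lets the designer prune the search space to the dominant parameters, which is the key time saving reported in the paper.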
de Barros, Pietro Paolo; Metello, Luis F.; Camozzato, Tatiane Sabriela Cagol; Vieira, Domingos Manuel da Silva
2015-01-01
Objective The present study is aimed at contributing to identify the most appropriate OSEM parameters to generate myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients’ body mass index. Materials and Methods The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the images reconstruction with six different combinations of iterations and subsets numbers. The images were analyzed by nuclear cardiology specialists taking their diagnostic value into consideration and indicating the most appropriate images in terms of diagnostic quality. Results An overall scoring analysis demonstrated that the combination of four iterations and four subsets has generated the most appropriate images in terms of diagnostic quality for all the classes of body mass index; however, the role played by the combination of six iterations and four subsets is highlighted in relation to the higher body mass index classes. Conclusion The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, ensuring the diagnosis and consequential appropriate and effective treatment for the patient. PMID:26543282
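The OSEM reconstruction compared above can be sketched on a toy problem: a multiplicative EM update applied to one ordered subset of measurements at a time, with the iteration and subset counts as the tunable knobs. The 2-pixel phantom and system matrix below are illustrative; clinical OSEM works on full sinograms:

```python
def osem(y, A, subsets, n_iter, n_pix):
    """Ordered-subsets EM: multiplicative update of pixel values using
    one block of measurements at a time. A is a list of rows of the
    system matrix, y the measured counts, subsets a list of row-index
    lists. All entries are assumed non-negative."""
    x = [1.0] * n_pix
    for _ in range(n_iter):
        for sub in subsets:
            # forward-project current image for this subset's rows
            fwd = {i: sum(A[i][j] * x[j] for j in range(n_pix)) for i in sub}
            for j in range(n_pix):
                num = sum(A[i][j] * y[i] / fwd[i] for i in sub)
                den = sum(A[i][j] for i in sub)
                if den > 0:
                    x[j] *= num / den          # multiplicative EM step
    return x

# tiny 2-pixel phantom, 4 measurements, 2 subsets (noise-free data)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
x_true = [2.0, 1.0]
y = [sum(a * t for a, t in zip(row, x_true)) for row in A]
x_hat = osem(y, A, subsets=[[0, 1], [2, 3]], n_iter=4, n_pix=2)
```

On noise-free data this tiny problem converges immediately; with noisy clinical data, more iterations sharpen detail but also amplify noise, which is why the iterations-by-subsets combination has to be tuned per body-mass-index class as the study does.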
Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method
NASA Astrophysics Data System (ADS)
Yuan, Zhe; Zhang, Yiming; Zheng, Qijia
2018-02-01
An electromagnetic method with a transmitted waveform coded by an m-sequence achieves better anti-noise performance than the conventional approach with a square wave. The anti-noise performance of the m-sequence varies with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance of m-sequences with different coding parameters is required to optimize them. This paper proposes the concept of an identification system, in which the identified Earth impulse response is obtained from the measured system output with the voltage response as input. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is verified by field experiment. The quantitative analysis method proposed in this paper provides new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
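The noise advantage of m-sequence coding rests on the sequence's sharp periodic autocorrelation. A minimal sketch: generate one period from a Fibonacci linear-feedback shift register and verify the two-valued autocorrelation (N at zero lag, -1 elsewhere). The register length and taps are a small illustrative choice, not the survey's actual coding parameters:

```python
def m_sequence(taps, nbits):
    """Generate one period of a maximal-length sequence from a
    Fibonacci LFSR. taps are 1-indexed register positions XORed into
    the feedback; (4, 3) corresponds to the primitive polynomial
    x^4 + x^3 + 1, giving period 2^4 - 1 = 15."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])                          # output bit
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # feedback bit
        state = [fb] + state[:-1]                      # shift register
    return seq

def circular_autocorr(bits, lag):
    """Periodic autocorrelation of the +/-1 (bipolar) mapping."""
    n = len(bits)
    a = [1 - 2 * b for b in bits]
    return sum(a[i] * a[(i + lag) % n] for i in range(n))

seq = m_sequence((4, 3), 4)
r0 = circular_autocorr(seq, 0)                         # in-phase: equals period
r_off = [circular_autocorr(seq, k) for k in range(1, 15)]
```

Because every out-of-phase lag correlates to -1, cross-correlating the received signal with the transmitted code concentrates the Earth response at zero lag while uncorrelated noise averages down, which is the mechanism the paper quantifies as a function of the coding parameters.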
Scattina, Alessandro; Mo, Fuhao; Masson, Catherine; Avalle, Massimiliano; Arnoux, Pierre Jean
2018-01-30
This work aims at investigating the influence of some front-end design parameters of a passenger vehicle on the behavior of, and damage occurring in, the human lower limbs when impacted in an accident. The analysis is carried out by means of finite element analysis, using a generic car model for the vehicle and the lower limbs model for safety (LLMS) for the purpose of pedestrian safety. Considering the pedestrian standardized impact procedure (as in the 2003/12/EC Directive), a parametric analysis was performed through a design of experiments plan. Various material properties, the bumper thickness, the positions of the higher and lower bumper beams, and the position of the pedestrian were varied in order to identify how they influence injury occurrence. The injury prediction was evaluated from the knee lateral flexion, ligament elongation, and state of stress in the bone structure. The results highlighted that the offset between the higher and lower bumper beams is the most influential parameter affecting the knee ligament response; its influence is smaller or absent for the other responses and the other considered parameters. The stiffness characteristics of the bumper, instead, have a more notable effect on the tibia. Even if an optimal value of the variables could not be identified, trends were detected, with the potential of indicating strategies for improvement. The behavior of a vehicle front end in an impact against a pedestrian can thus be improved by optimizing its design, and this work indicates potential strategies for doing so. In this work, each parameter was changed independently, one at a time; in future work, the interaction between the design parameters could also be investigated. Moreover, a similar parametric analysis could be carried out using a standard mechanical legform model in order to understand potential diversities or correlations between standard tools and human models.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers were identified in the prediction of acrobatics success. The purpose of the present study was to evaluate the relative contribution of these parameters in performance throughout expertise or optimisation based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and complexity of its mastery. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment-multibody-model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performances in novice recorded trials. Moment of inertia contribution to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. Contribution to performance of momentum transfer to the trunk during the flight prevailed in all recorded trials. Although optimisation decreased transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. 
Conversely, improved aerial technique was the main factor helping advanced gymnasts increase their performance. For both groups, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalized to the investigation of learning in any aerial skill. PMID:28422954
Identification of handwriting by using the genetic algorithm (GA) and support vector machine (SVM)
NASA Astrophysics Data System (ADS)
Zhang, Qigui; Deng, Kai
2016-12-01
As portable digital cameras and camera phones become more and more popular, there is a pressing need to let people photograph handwritten characters at any time and then identify and store them. In this paper, a genetic algorithm (GA) and a support vector machine (SVM) are used for the identification of handwriting. Compared with conventional parameter-optimization methods, this technique overcomes two defects: first, such methods easily become trapped in a local optimum; second, searching for the best parameters over a large range reduces the efficiency of classification and prediction. As the experimental results suggest, GA-SVM achieves a higher recognition rate.
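The GA-based hyperparameter search described above can be sketched without any SVM library by maximizing a surrogate objective standing in for cross-validation accuracy over (log C, log gamma). The GA operators, the surrogate function and its peak location are all illustrative assumptions:

```python
import random

def ga_maximize(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Tiny real-coded genetic algorithm: elitism, tournament selection,
    arithmetic crossover, Gaussian mutation. Returns the best point."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                               # keep the two best
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)   # tournament pick
            b = max(rng.sample(pop, 3), key=fitness)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(hi[d], max(lo[d], child[d] + rng.gauss(0.0, 0.1)))
                     for d in range(dim)]              # mutate and clip
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# surrogate for cross-validation accuracy of an SVM over (log C, log gamma);
# the peak at (2, -1) is made up for illustration
def surrogate(p):
    return 1.0 - 0.05 * ((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2)

best = ga_maximize(surrogate, bounds=([-5.0, -5.0], [5.0, 5.0]))
```

In the real GA-SVM pipeline the surrogate would be replaced by an actual cross-validated SVM accuracy, which is expensive; the GA's appeal is that it explores the whole box instead of the local neighborhood a gradient search would be confined to.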
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefits for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. 
This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
Optimal Design of Material and Process Parameters in Powder Injection Molding
NASA Astrophysics Data System (ADS)
Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.
2007-04-01
The paper is concerned with optimization and parametric identification for the different stages of the Powder Injection Molding process, which consists first of the injection of a powder mixture with a polymer binder and then the sintering of the resulting powder part by solid-state diffusion. The first part describes an original methodology to optimize the process and geometry parameters in the injection stage, based on the combination of design of experiments and adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometric curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant, and are shown to give satisfactory results.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive model parameters directly from terrain properties, so that there was no need to calibrate model parameters. Unfortunately, the uncertainties associated with this model derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. 
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, thus demonstrating that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
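The linearly decreasing inertia weight strategy adopted above can be sketched directly. The code below is a minimal PSO with the 0.9-to-0.4 inertia schedule and the particle and iteration counts reported in the text; the arccosine adjustment of the acceleration coefficients is omitted for brevity (they stay fixed at 2.0), and the sphere objective is a stand-in for the flood-forecast error measure:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=30, seed=7):
    """Particle swarm optimization with a linearly decreasing inertia
    weight (0.9 -> 0.4). Acceleration coefficients fixed at 2.0."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    x = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                       # personal bests
    gbest = min(pbest, key=f)[:]                      # global best
    for t in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * t / (n_iter - 1)      # linear decrease
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + 2.0 * rng.random() * (pbest[i][d] - x[i][d])
                           + 2.0 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi[d], max(lo[d], x[i][d] + v[i][d]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

# toy objective standing in for a flood-forecast error measure
sphere = lambda p: sum(c * c for c in p)
best = pso_minimize(sphere, bounds=([-10.0, -10.0], [10.0, 10.0]))
```

The high early inertia favors exploration of the parameter space; as the weight decays the swarm contracts around the best-found region, which is the behavior the improved algorithm relies on when calibrating the Liuxihe model.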
Kinematic Optimization of Robot Trajectories for Thermal Spray Coating Application
NASA Astrophysics Data System (ADS)
Deng, Sihao; Liang, Hong; Cai, Zhenhua; Liao, Hanlin; Montavon, Ghislain
2014-12-01
Industrial robots are widely used in the field of thermal spray nowadays. Thanks to their high accuracy and programmable flexibility, spraying on workpieces with complex geometry can be realized in an equipped spray room. However, in some cases the robots cannot guarantee the process parameters defined by the robot movement, such as the scanning trajectory, spray angle, and relative speed between the torch and the substrate, which have a distinct influence on heat and mass transfer during the generation of any thermally sprayed coating. In this study, an investigation of the robot kinematics was carried out to find the rules of motion in a common case. The results showed that examining the motion behavior of each robot axis makes it possible to identify the motion problems in the trajectory. This approach allows the robot trajectory generation to be optimized within a limited working envelope. It also minimizes the influence of robot performance limits, achieving a more constant relative scanning speed, which is a key parameter in thermal spraying.
Monitoring Of Air Quality Parameters For Construction Of Fire Risk Detection Systems
NASA Astrophysics Data System (ADS)
Romancov, I. I.; Dashkovky, A. G.; Panin, V. F.; Melkov, D. N.
2017-01-01
An analysis of the fire development process is given, showing that there are seven stages of fire development; each stage corresponds to a set of phenomena (factors, signs) of a fire-risk condition, characterized by a set of defined parameters. It is observed that registering late-stage factors (high ambient temperature, CO2 content, etc.) means registering an actual fire, whereas registering early-stage factors (gases from thermal destruction of materials, fumes, etc.) means registering a fire-risk situation. It is shown that lowering the stage of the registered factors leads to fire-preventive and diagnostic systems, although the lower the registered stage, the more uncertain the connection between the fact of its detection and a fire. It is noted that, with the development of electronic equipment, the stage of the fire-situation factors used for detection is decreasing overall, and that for each controlled object the optimal factor must be chosen (identified); for aircraft, in many respects the optimal factors are smoke and its TV image.
Mohan, S K; Viruthagiri, T; Arunkumar, C
2014-04-01
Production of tannase by Aspergillus flavus (MTCC 3783) using tamarind seed powder as substrate was studied in submerged fermentation. A Plackett-Burman design was applied for the screening of 12 medium nutrients. From the results, the significant nutrients were identified as tannic acid, magnesium sulfate, ferrous sulfate and ammonium sulfate. Further, the optimization of process parameters was carried out using response surface methodology (RSM). RSM was applied to design experiments evaluating the interactive effects through a full 31 factorial design. The optimum conditions were tannic acid concentration, 3.22%; fermentation period, 96 h; temperature, 35.1 °C; and pH 5.4. The high value of the regression coefficient (R² = 0.9638) indicates excellent agreement of the second-order polynomial regression model with the experimental data. The RSM revealed that a maximum tannase production of 139.3 U/ml was obtained at the optimum conditions.
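The RSM step above fits a second-order polynomial to the experimental runs and reports its R². A minimal sketch of that fit, using synthetic data in place of the paper's fermentation results (the two coded factors and all coefficients below are illustrative assumptions):

```python
import numpy as np

# Synthetic "experimental" runs: 30 observations of a response with a
# quadratic optimum, standing in for the tannase activity measurements
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (30, 2))                      # two coded factors
y = (139.0 - 5.0 * (X[:, 0] - 0.2) ** 2
           - 3.0 * (X[:, 1] + 0.1) ** 2
           + rng.normal(0, 0.1, 30))

# Design matrix for the full quadratic model:
# 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination, the R² quoted in RSM studies
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()
```

Negative fitted coefficients on the squared terms confirm a maximum (rather than a saddle), and the stationary point of the fitted polynomial gives the predicted optimum conditions.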
Andrade, Thalles A; Errico, Massimiliano; Christensen, Knud V
2017-11-01
The identification of the influence of the reaction parameters is of paramount importance when defining a process design. In this work, non-edible castor oil was reacted with methanol to produce a possible component for biodiesel blends, using liquid enzymes as the catalyst. Temperature, alcohol-to-oil molar ratio, enzyme content and added water content were the reaction parameters evaluated in the transesterification reactions. The optimal conditions, giving the best final FAME yield and FFA content in the methyl ester phase, were identified. At 35 °C, a 6.0 methanol-to-oil molar ratio, 5 wt% enzyme and 5 wt% added water, a FAME yield of 94% and an FFA content of 6.1% in the final composition were obtained. The investigation was completed with an analysis of the component profiles, showing that at least 8 h are necessary to reach a satisfactory FAME yield together with a minor FFA content.
BEARKIMPE-2: A VBA Excel program for characterizing granular iron in treatability studies
NASA Astrophysics Data System (ADS)
Firdous, R.; Devlin, J. F.
2014-02-01
The selection of a suitable kinetic model to investigate the reaction rate of a contaminant with granular iron (GI) is essential to optimize permeable reactive barrier (PRB) performance in terms of reactivity. The newly developed Kinetic Iron Model (KIM) determines the surface rate constant (k) and the sorption parameters (Cmax and J), which it was previously not possible to identify uniquely. The code, written in Visual Basic for Applications (VBA) within Microsoft Excel, was adapted from the earlier command-line FORTRAN codes BEARPE and KIMPE. The program is organized with several user interface screens (UserForms) that guide the user step by step through the analysis. BEARKIMPE-2 uses a non-linear optimization algorithm to calculate transport and chemical kinetic parameters. Both reactive and non-reactive sites are considered. A demonstration of the functionality of BEARKIMPE-2 with three nitroaromatic compounds showed that the differences in reaction rates for these compounds could be attributed to differences in their sorption behavior rather than their propensities to accept electrons in the reduction process.
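The core of such a treatability analysis is a non-linear least-squares fit of rate parameters to measured concentrations. The sketch below is not KIM itself: the pseudo-first-order model and the synthetic data are illustrative assumptions, showing only the shape of the fitting step:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed pseudo-first-order surface reaction: C(t) = C0 * exp(-k t),
# where k plays the role of the surface rate constant estimated by KIM
def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Synthetic "batch test" data: true c0 = 1.0, true k = 0.35, small noise
t = np.linspace(0, 10, 25)
rng = np.random.default_rng(2)
c_obs = model(t, 1.0, 0.35) + rng.normal(0, 0.01, t.size)

# Non-linear optimization recovers the kinetic parameters
(p_c0, p_k), _ = curve_fit(model, t, c_obs, p0=[0.5, 0.1])
```

A real GI analysis would replace `model` with the coupled transport/sorption equations (including Cmax and J) and feed in column-experiment data, but the optimization call has the same structure.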
Modeling and simulation of the debonding process of composite solid propellants
NASA Astrophysics Data System (ADS)
Feng, Tao; Xu, Jin-sheng; Han, Long; Chen, Xiong
2017-07-01
In order to study the damage evolution law of composite solid propellants, a molecular dynamics particle-filling algorithm was used to establish a mesoscopic structure model of HTPB (hydroxyl-terminated polybutadiene) propellant. The cohesive element method was employed for the adhesion interface between the AP (ammonium perchlorate) particles and the HTPB matrix, and a bilinear cohesive zone model was used to describe the mechanical response of the interface elements. An inversion analysis method based on the Hooke-Jeeves optimization algorithm was employed to identify the parameters of the cohesive zone model (CZM) of the particle/binder interface. The optimized parameters were then applied in the commercial finite element software ABAQUS to simulate the damage evolution process of the AP particles and HTPB matrix, including crack initiation, development, coalescence, and macroscopic cracking. Finally, the simulated stress-strain curve was compared with the experimental curves. The result shows that the bilinear cohesive zone model can accurately describe the debonding and fracture process between the AP particles and the HTPB matrix under uniaxial tensile loading.
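The bilinear cohesive law referred to above has a simple closed form: traction rises linearly to a peak at an initiation separation, then softens linearly to zero at a failure separation. A minimal sketch, with parameter values that are placeholders rather than the identified interface properties:

```python
import numpy as np

def bilinear_czm(delta, t_max=1.2, d0=0.01, d_f=0.1):
    """Bilinear traction-separation law: linear rise to peak traction
    t_max at separation d0, linear softening to zero at d_f.
    All parameter values here are illustrative assumptions."""
    delta = np.asarray(delta, dtype=float)
    rising = np.where(delta <= d0, t_max * delta / d0, 0.0)
    softening = np.where((delta > d0) & (delta < d_f),
                         t_max * (d_f - delta) / (d_f - d0), 0.0)
    return rising + softening

# The fracture energy is the area under the triangle: G_c = t_max * d_f / 2
d = np.linspace(0.0, 0.12, 1201)
tr = bilinear_czm(d)
g_c = float(np.sum(0.5 * (tr[1:] + tr[:-1]) * np.diff(d)))
```

In an inverse analysis like the one described, t_max, d0 and d_f (or equivalently the stiffness, strength and fracture energy) are the unknowns adjusted until the simulated stress-strain curve matches the experiment.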
NASA Astrophysics Data System (ADS)
Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.
2017-09-01
In the injection moulding process, defects are always encountered that affect the final product's shape and functionality. This study concerns minimizing warpage and optimizing the process parameters of an injection-moulded part. Apart from eliminating product waste, this project also provides the best recommended parameter settings. This research studied five parameters. The optimization showed that warpage improved by 42.64%, from 0.6524 mm in the Autodesk Moldflow Insight (AMI) simulation to 0.30879 mm with the Genetic Algorithm (GA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Vickie E.; Borreguero, Jose M.; Bhowmik, Debsindhu
Graphical abstract: - Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to any other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments to establish a connection between the fundamental physics at the nanoscale and data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail by using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D2O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without the ND. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
Goold, Hugh Douglas; Nguyen, Hoa Mai; Kong, Fantao; Beyly-Adriano, Audrey; Légeret, Bertrand; Billon, Emmanuelle; Cuiné, Stéphan; Beisson, Fred; Peltier, Gilles; Li-Beisson, Yonghua
2016-01-01
Microalgae have emerged as a promising source for biofuel production. Massive oil and starch accumulation in microalgae is possible, but occurs mostly when biomass growth is impaired. The molecular networks underlying the negative correlation between growth and reserve formation are not known. Thus isolation of strains capable of accumulating carbon reserves during optimal growth would be highly desirable. To this end, we screened an insertional mutant library of Chlamydomonas reinhardtii for alterations in oil content. A mutant accumulating five times more oil and twice more starch than wild-type during optimal growth was isolated and named constitutive oil accumulator 1 (coa1). Growth in photobioreactors under highly controlled conditions revealed that the increase in oil and starch content in coa1 was dependent on light intensity. Genetic analysis and DNA hybridization pointed to a single insertional event responsible for the phenotype. Whole genome re-sequencing identified in coa1 a >200 kb deletion on chromosome 14 containing 41 genes. This study demonstrates that (1) the generation of algal strains accumulating higher reserve amounts without compromising biomass accumulation is feasible; (2) light is an important parameter in phenotypic analysis; and (3) a chromosomal region (quantitative trait locus) acts as a suppressor of carbon reserve accumulation during optimal growth. PMID:27141848
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2017-08-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
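Single-parameter sensitivity from a polynomial empirical model reduces to examining the derivative of the fitted response. The sketch below uses an assumed quadratic stand-in for the paper's interlaminar-shear-strength model (coefficients and the 0.1 stability threshold are illustrative, chosen so the example reproduces a [100, 150] °C-style stable band):

```python
import numpy as np

# Assumed empirical model: S(T) = -0.002 T^2 + 0.5 T + 20
# (a stand-in for the fitted response surface, one parameter at a time)
poly = np.poly1d([-0.002, 0.5, 20.0])
dpoly = poly.deriv()

# Single-parameter sensitivity curve: |dS/dT| over the temperature range
T = np.linspace(80.0, 170.0, 91)
sensitivity = np.abs(dpoly(T))

# "Stable" range: where the response changes slowly with the parameter
stable = T[sensitivity < 0.1]
```

The same derivative-and-threshold step, applied to each parameter of the multi-variable polynomial in turn while holding the others fixed, yields the per-parameter stability ranges quoted in the abstract.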
NASA Astrophysics Data System (ADS)
Alligné, S.; Decaix, J.; Müller, A.; Nicolet, C.; Avellan, F.; Münch, C.
2017-04-01
Due to the massive penetration of alternative renewable energies, hydropower is a key energy conversion technology for stabilizing the electrical power network by using hydraulic machines at off design operating conditions. At full load, the axisymmetric cavitation vortex rope developing in Francis turbines acts as an internal source of energy, leading to an instability commonly referred to as self-excited surge. 1-D models are developed to predict this phenomenon and to define the range of safe operating points for a hydropower plant. These models require a calibration of several parameters. The present work aims at identifying these parameters by using CFD results as objective functions for an optimization process. A 2-D Venturi and 3-D Francis turbine are considered.
NASA Astrophysics Data System (ADS)
Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred
2008-06-01
We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
A meta-analysis of the mechanical properties of ice-templated ceramics and metals
Deville, Sylvain; Meille, Sylvain; Seuba, Jordi
2015-01-01
Ice templating, also known as freeze casting, is a popular shaping route for macroporous materials. Over the past 15 years, it has been widely applied to various classes of materials, and in particular ceramics. Many formulation and process parameters, often interdependent, affect the outcome. It is thus difficult to understand the various relationships between these parameters from isolated studies where only a few of these parameters have been investigated. We report here the results of a meta analysis of the structural and mechanical properties of ice templated materials from an exhaustive collection of records. We use these results to identify which parameters are the most critical to control the structure and properties, and to derive guidelines for optimizing the mechanical response of ice templated materials. We hope these results will be a helpful guide to anyone interested in such materials. PMID:27877817
Sheet metals characterization using the virtual fields method
NASA Astrophysics Data System (ADS)
Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice
2018-05-01
In this work, a characterisation method involving a deep-notched specimen subjected to tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. Combined with recently developed sensitivity-based virtual fields, the method optimally locates the areas of the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.
Epstein, F H; Mugler, J P; Brookeman, J R
1994-02-01
A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. 
Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
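The Latin hypercube scheme used above to generate parameter sets can be sketched in a few lines: each of the n samples falls in a distinct stratum of every parameter's range, giving better coverage than plain random sampling. The two example parameters and their bounds are illustrative assumptions:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=4):
    """Latin hypercube sample: independent random permutation of
    strata per dimension, plus jitter within each stratum."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # Each column of `perm` is a permutation of 0..n_samples-1
    perm = np.argsort(rng.random((n_samples, dim)), axis=0)
    u = (perm + rng.random((n_samples, dim))) / n_samples  # in [0, 1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. two hypothetical parameters: an allocation fraction in [0, 1]
# and a drainage rate in [0.1, 10]
samples = latin_hypercube(100, [(0.0, 1.0), (0.1, 10.0)])
```

Each parameter's 100 values land one per percentile stratum, which is what makes the resulting ensemble efficient for both calibration and parameter-importance analysis.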
Optimizing the fine lock performance of the Hubble Space Telescope fine guidance sensors
NASA Technical Reports Server (NTRS)
Eaton, David J.; Whittlesey, Richard; Abramowicz-Reed, Linda; Zarba, Robert
1993-01-01
This paper summarizes the on-orbit performance to date of the three Hubble Space Telescope Fine Guidance Sensors (FGS's) in Fine Lock mode, with respect to acquisition success rate, ability to maintain lock, and star brightness range. The process of optimizing Fine Lock performance, including the reasoning underlying the adjustment of uplink parameters, and the effects of optimization are described. The Fine Lock optimization process has combined theoretical and experimental approaches. Computer models of the FGS have improved understanding of the effects of uplink parameters and fine error averaging on the ability of the FGS to acquire stars and maintain lock. Empirical data have determined the variation of the interferometric error characteristics (so-called 's-curves') between FGS's and over each FGS field of view, identified binary stars, and quantified the systematic error in Coarse Track (the mode preceding Fine Lock). On the basis of these empirical data, the values of the uplink parameters can be selected more precisely. Since launch, optimization efforts have improved FGS Fine Lock performance, particularly acquisition, which now enjoys a nearly 100 percent success rate. More recent work has been directed towards improving FGS tolerance of two conditions that exceed its original design requirements. First, large amplitude spacecraft jitter is induced by solar panel vibrations following day/night transitions. This jitter is generally much greater than the FGS's were designed to track, and while the tracking ability of the FGS's has been shown to exceed design requirements, losses of Fine Lock after day/night transitions are frequent. Computer simulations have demonstrated a potential improvement in Fine Lock tracking of vehicle jitter near terminator crossings. 
Second, telescope spherical aberration degrades the interferometric error signal in Fine Lock, but use of the FGS two-thirds aperture stop restores the transfer function with a corresponding loss of throughput. This loss requires the minimum brightness of acquired stars to be about one magnitude brighter than originally planned.
Design of experiments to optimize an in vitro cast to predict human nasal drug deposition.
Shah, Samir A; Dickens, Colin J; Ward, David J; Banaszek, Anna A; George, Chris; Horodnik, Walter
2014-02-01
Previous studies showed nasal spray in vitro tests cannot predict in vivo deposition, pharmacokinetics, or pharmacodynamics. This challenge makes it difficult to assess deposition achieved with new technologies delivering to the therapeutically beneficial posterior nasal cavity. In this study, we determined best parameters for using a regionally divided nasal cast to predict deposition. Our study used a model suspension and a design of experiments to produce repeatable deposition results that mimic nasal deposition patterns of nasal suspensions from the literature. The seven-section (the nozzle locator, nasal vestibule, front turbinate, rear turbinate, olfactory region, nasopharynx, and throat filter) nylon nasal cast was based on computed tomography images of healthy humans. It was coated with a glycerol/Brij-35 solution to mimic mucus. After assembling and orienting, airflow was applied and nasal spray containing a model suspension was sprayed. After disassembling the cast, drug depositing in each section was assayed by HPLC. The success criteria for optimal settings were based on nine in vivo studies in the literature. The design of experiments included exploratory and half factorial screening experiments to identify variables affecting deposition (angles, airflow, and airflow time), optimization experiments, and then repeatability and reproducibility experiments. We found tilt angle and airflow time after actuation affected deposition the most. The optimized settings were flow rate of 16 L/min, postactuation flow time of 12 sec, a tilt angle of 23°, nozzle angles of 0°, and actuation speed of 5 cm/sec. Neither cast nor operator caused significant variation of results. We determined cast parameters to produce results resembling suspension nasal sprays in the literature. The results were repeatable and unaffected by operator or cast. These nasal spray parameters could be used to assess deposition from new devices or formulations. 
For human deposition studies using radiolabeled formulations, this cast could show that radiolabel deposition represents drug deposition. Our methods could also be used to optimize settings for other casts.
Acoustical characterization and parameter optimization of polymeric noise control materials
NASA Astrophysics Data System (ADS)
Homsi, Emile N.
2003-10-01
The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to the physical parameters that affect performance, are developed for a single-layer panel configuration and adapted for layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound-attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory is explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of the top and bottom layers is considered. It was found that a layered construction of the co-injection type exhibits fused planes at the interface and moves in phase; the model for the single-layer case is therefore adapted to the layered case, where the structure behaves as a single panel. The primary physical parameters that affect STL are identified and manipulated. The theoretical analysis is linked to the resin's matrix attributes. High-STL material with representative characteristics is evaluated against standard resins. It was found that high STL could be achieved by altering the material's matrix and by integrating design solutions in the low-frequency range. A numerical approach is suggested for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
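The classical starting point for single-panel STL models of the kind discussed above is the field-incidence mass law, in which transmission loss depends only on surface mass density and frequency. A minimal sketch (this is the textbook mass law, not the dissertation's extended model, which adds stiffness and experimental corrections):

```python
import math

def mass_law_stl(m_surface, f_hz):
    """Field-incidence mass law: TL = 20 log10(m f) - 47 dB,
    with m the surface mass density in kg/m^2 and f the frequency in Hz.
    Stiffness, damping and coincidence effects are ignored."""
    return 20.0 * math.log10(m_surface * f_hz) - 47.0

# Doubling either the surface density or the frequency adds about 6 dB
tl_base = mass_law_stl(10.0, 1000.0)     # 10 kg/m^2 panel at 1 kHz
tl_double = mass_law_stl(20.0, 1000.0)   # same panel, twice the mass
```

This 6 dB-per-doubling behavior is why surface mass density is singled out in the abstract as the dominant factor, and why the optimization trades mass against load-bearing attributes.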
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
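Of the mathematical programming techniques listed above, linear programming is the simplest to illustrate. The sketch below is a hypothetical two-well conjunctive-use problem, not one from the review: minimize pumping cost subject to a demand constraint and a drawdown-style limit, with all coefficients invented for illustration:

```python
from scipy.optimize import linprog

# Decision variables: pumping rates q1, q2 at two wells (volume/time)
c = [2.0, 3.0]            # unit pumping costs (assumed)

# scipy's linprog uses A_ub @ x <= b_ub, so the demand constraint
# q1 + q2 >= 100 is written with negated coefficients
A_ub = [[-1.0, -1.0],     # -(q1 + q2) <= -100  (meet water demand)
        [0.5, 0.8]]       # 0.5 q1 + 0.8 q2 <= 70  (drawdown limit)
b_ub = [-100.0, 70.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 120), (0, 120)])
```

The cheaper well is used to capacity first, which is the qualitative behavior such management models exhibit; real formulations embed a groundwater flow simulation in the constraints rather than fixed coefficients.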
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. 
Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
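The 1,000-sample Latin-hypercube step described in this abstract can be sketched as follows. This is a minimal generic implementation assuming unit-interval parameter ranges; the study's actual 11 parameter ranges and the PSUADE tooling are not reproduced here.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw n_samples points from a Latin-hypercube design.

    bounds: list of (low, high) tuples, one per parameter.
    Each parameter range is split into n_samples equal strata; exactly one
    point is drawn per stratum, then strata are shuffled per dimension.
    """
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # one uniform draw inside each of the n_samples strata, per dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):                       # decouple the dimensions
        rng.shuffle(u[:, j])
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# 1,000 samples of an 11-dimensional parameter space, as in the study
bounds = [(0.0, 1.0)] * 11
X = latin_hypercube(1000, bounds, rng=42)
print(X.shape)  # (1000, 11)
```

The stratification guarantees that every parameter's range is covered evenly even with a modest sample count, which is the property that makes the design attractive for building response surfaces.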
NASA Astrophysics Data System (ADS)
Kumar, Mukesh; Sigdel, A. K.; Gennett, T.; Berry, J. J.; Perkins, J. D.; Ginley, D. S.; Packard, C. E.
2013-10-01
With recent advances in flexible electronics, there is a growing need for transparent conductors with optimum conductivity tailored to the application and nearly zero residual stress to ensure mechanical reliability. Within amorphous transparent conducting oxide (TCO) systems, a variety of sputter growth parameters have been shown to separately impact film stress and optoelectronic properties due to the complex nature of the deposition process. We applied a statistical design of experiments (DOE) approach to identify growth parameter-material property relationships in amorphous indium zinc oxide (a-IZO) thin films and observed large, compressive residual stresses in films grown under conditions typically used for the deposition of highly conductive samples. Power, growth pressure, oxygen partial pressure, and RF power ratio (RF/(RF + DC)) were varied according to a full-factorial test matrix and each film was characterized. The resulting regression model and analysis of variance (ANOVA) revealed significant contributions to the residual stress from individual growth parameters as well as from interactions between growth parameters, but no conditions were found within the initial growth space that simultaneously produced low residual stress and high electrical conductivity. Extrapolation of the model results to lower oxygen partial pressures, combined with prior knowledge of conductivity-growth parameter relationships in the IZO system, allowed the selection of two promising growth conditions that were both empirically verified to achieve nearly zero residual stress and electrical conductivities >1480 S/cm. This work shows that a-IZO can be simultaneously optimized for high conductivity and low residual stress.
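The full-factorial DOE idea can be illustrated with a two-level design over the four growth parameters named in the abstract and a main-effects calculation. The coded levels and the toy response values below are invented for illustration; the study's actual test matrix, levels, and regression model are not reproduced here.

```python
from itertools import product

# Two-level full-factorial design over the four growth parameters
# named in the abstract; levels are coded -1/+1.
factors = ["power", "pressure", "O2_partial", "RF_ratio"]
design = list(product([-1, +1], repeat=len(factors)))  # 2^4 = 16 runs

def main_effects(design, y):
    """Average change in response when a factor goes from low to high."""
    effects = {}
    for j, name in enumerate(factors):
        hi = [yi for row, yi in zip(design, y) if row[j] == +1]
        lo = [yi for row, yi in zip(design, y) if row[j] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# toy residual-stress response dominated by oxygen partial pressure
y = [10 * row[2] + row[0] for row in design]
print(main_effects(design, y))
```

Because the design is orthogonal, each factor's effect can be estimated independently of the others; interaction terms (which the ANOVA in the study found significant) would be estimated from products of columns in the same way.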
NASA Astrophysics Data System (ADS)
Ginghtong, Thatchanok; Nakpathomkun, Natthapon; Pechyen, Chiravoot
2018-06-01
The parameters of the plastic injection molding process were investigated for the manufacture of a 64 oz. ultra-thin polypropylene bucket. Three main parameters (injection speed, melt temperature, and holding pressure) were studied for their effect on physical appearance and compressive strength. Taguchi's L9 (3^3) orthogonal array was used to carry out the experimental plan. The physical properties were measured and the compressive strength was determined using linear regression analysis. Differential scanning calorimetry (DSC) was used to analyze the crystalline structure of the product. The optimization results show that the proposed approach can help engineers identify optimal process parameters and achieve competitive advantages in energy consumption and product quality. The resulting injection molding settings (24 mm shot stroke, 1.47 mm transfer position, 268 rpm screw speed, 100 mm/s injection speed, 172 ton clamping force, 800 kgf holding pressure, 0.9 s holding time, and 1.4 s cooling time) produced products of satisfactory shape and proportion. The percentage contributions of the parameters are injection speed (71.07%), melt temperature (23.31%), and holding pressure (5.62%). The product was able to withstand a compressive load of up to 839 N before deforming plastically. The lower compressive strength of the super-ultra-thin-wall product is attributed to the crystalline structure formed at the low melt temperature.
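The Taguchi analysis behind the reported percentage contributions can be illustrated with the standard L9(3^3) orthogonal array and a percent-contribution (sum-of-squares) calculation. The response values below are toy numbers, not the paper's measurements.

```python
# Standard L9 orthogonal array for three factors at three levels
# (columns: injection speed, melt temperature, holding pressure;
# levels coded 0/1/2).
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

def percent_contribution(array, y):
    """Each factor's sum of squares as a share of the total (Taguchi ANOVA)."""
    n = len(y)
    grand = sum(y) / n
    total_ss = sum((yi - grand) ** 2 for yi in y)
    shares = []
    for j in range(len(array[0])):
        ss = 0.0
        for level in (0, 1, 2):
            group = [yi for row, yi in zip(array, y) if row[j] == level]
            ss += len(group) * (sum(group) / len(group) - grand) ** 2
        shares.append(100 * ss / total_ss)
    return shares

y = [52, 55, 58, 60, 63, 66, 68, 71, 74]   # toy compressive strengths
print([round(s, 1) for s in percent_contribution(L9, y)])
```

Because the array is orthogonal, each factor's level means can be compared fairly even though only 9 of the 27 possible runs are performed; the shares play the same role as the 71.07% / 23.31% / 5.62% figures in the abstract.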
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation shows two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. The present study instead simultaneously minimizes the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km², with thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012 and their maximum discharges exceed 1,000 m³/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m × 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin hypercube sampling is a uniform sampling algorithm. The discharges are calculated for parameter values sampled by a simplified version of Latin hypercube sampling, and the observed discharge falls within the spread of the calculated discharges, suggesting that it is possible to estimate the discharge accurately by adjusting the parameters.
The discharge at a given water level station can indeed be accurately estimated by using the parameter values optimized for that station. However, in some cases the discharge calculated with parameter values optimized for one water level station does not match the observed discharge at another station, and it is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions under the condition that, at every station, the error normalized by that station's minimum error is under 3. The optimization performance of five algorithm implementations and a simplified version of Latin hypercube sampling are compared. The five implementations are NSGA2 and PAES from the optimization library inspyred, and MCO_NSGA2R, MOPSOCD, and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES, and MOPSOCD are based on a genetic algorithm, an evolution strategy, and particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two NSGA2 implementations in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
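The selection rule described in this abstract, keeping Pareto solutions whose error at every station stays within a factor of 3 of that station's best error, can be sketched as follows. The error values are toy numbers, not the study's data.

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of a (n_solutions, n_objectives) array."""
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return keep

def select_balanced(costs, threshold=3.0):
    """Keep Pareto solutions whose error at every station is within
    `threshold` times the best error seen at that station."""
    front = costs[pareto_front(costs)]
    normalized = front / front.min(axis=0)
    return front[np.all(normalized <= threshold, axis=1)]

# toy mean-squared errors at three water-level stations
costs = np.array([
    [1.0, 9.0, 2.0],   # excellent at station 1, poor at station 2
    [2.0, 2.0, 2.0],   # balanced across all stations
    [9.0, 1.0, 1.0],   # poor at station 1
    [3.0, 3.0, 3.0],   # dominated by the balanced solution
])
print(select_balanced(costs))
```

Only the balanced solution survives: the specialized solutions sit on the Pareto front but violate the factor-of-3 condition at some station, which is exactly the trade-off the abstract describes.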
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang
2015-03-01
A splicing parameter optimization method to increase the tensile strength of the splicing joint between a photonic crystal fiber (PCF) and a conventional fiber is demonstrated. Based on the splicing recipes provided by the splicer or fiber manufacturers, the optimal values of several major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splicing joints between PCFs and conventional fibers is validated through experiments.
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of the continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Design of compact long-period gratings imprinted in optimized photonic crystal fibers
NASA Astrophysics Data System (ADS)
Seraji, F. E.; Chehreghani Anzabi, L.; Farsinezhad, S.
2009-10-01
To imprint a long-period grating (LPG) in a photonic crystal fiber (PCF) with an optimum response, the parameters of the PCF should first be optimized. In this paper, by using a semi-analytical enhanced improved vectorial effective index method, the optimized PCF parameters are determined by dividing the single-mode operation of the PCF into two regions in terms of air-hole spacing Λ (Λ > 3 μm and Λ ≤ 3 μm). For each region, appropriate expressions are suggested to evaluate the PCF parameters. By calculating the effective refractive index difference between the optimized core and cladding of the PCF under a phase-matching condition, the optimum grating period in terms of the PCF parameters is obtained.
Case study: Optimizing fault model input parameters using bio-inspired algorithms
NASA Astrophysics Data System (ADS)
Plucar, Jan; Grunt, Ondřej; Zelinka, Ivan
2017-07-01
We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. The model, constructed using a Petri net approach, represents a dynamic model of the GSM network environment in the suburban areas of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. To find the optimal set of parameters, we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self-Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP), and designs a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations, and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
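A minimal ACO sketch for a tiny symmetric TSP (a special case of the VRP) makes the three key parameters explicit: the pheromone weight alpha, the visibility weight beta, and the evaporation rate rho, which are the kind of parameters whose sensitivity such a study analyzes. This is generic textbook ACO in Python, not the authors' C++ system.

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant colony optimization for a symmetric TSP.

    alpha weights pheromone, beta weights heuristic visibility (1/d),
    rho is the evaporation rate.
    """
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))           # visibility; diagonal is a dummy
    tau = np.ones((n, n))                    # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False           # forbid revisiting
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1 - rho                       # evaporation
        for tour, length in tours:           # deposit, shorter tours deposit more
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += 1.0 / length
                tau[b, a] += 1.0 / length
    return best_tour, best_len

# four cities on a unit square; the optimal tour is the perimeter
pts = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], float)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = aco_tsp(dist)
print(length)
```

Raising alpha makes the colony exploit existing trails more aggressively (risking the premature convergence mentioned in the abstract), while raising rho forgets stale trails faster; this is the trade-off a sensitivity analysis of these parameters explores.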
Tolrà, R P; Alonso, R; Poschenrieder, C; Barceló, D; Barceló, J
2000-08-11
Liquid chromatography-atmospheric pressure chemical ionization mass spectrometry was used to identify glucosinolates in plant extracts. Optimization of the analytical conditions and determination of the method detection limit were performed using commercial 2-propenylglucosinolate (sinigrin). Optimal values were determined for the following parameters: nebulization pressure, gas temperature, drying gas flow, capillary voltage, corona current, and fragmentor conditions. The method detection limit for sinigrin was 2.85 ng. For validation of the method, the glucosinolates in reference material (rapeseed) from the Community Bureau of Reference Materials (BCR) were analyzed. The method was applied to the determination of glucosinolates in Thlaspi caerulescens plants.
NASA Astrophysics Data System (ADS)
Iskander-Rizk, Sophinese; Wu, Min; Springeling, Geert; Mastik, Frits; Beurskens, Robert H. S. H.; van der Steen, Antonius F. W.; van Soest, Gijs
2018-02-01
Intravascular photoacoustic/ultrasound imaging (IVPA/US) can image the structure and composition of atherosclerotic lesions, identifying lipid-rich plaques ex vivo and in vivo. In the literature, multiple IVPA/US catheter designs have been presented and validated both in ex-vivo models and in preclinical in-vivo settings. Since the catheter is a critical component of the imaging system, we discuss here a catheter design oriented toward imaging plaque in a realistic and translatable setting. We present a catheter optimized for light delivery, manageable flush parameters, and robustness, with reduced risk of mechanical damage at the laser/catheter joint interface. We also show the capability of imaging within the sheath and in a water medium.
Chasing a Comet with a Solar Sail
NASA Technical Reports Server (NTRS)
Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.
2008-01-01
Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs, for a range of sailcraft performance metrics and mission transit times, that enable a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
This work develops a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods yield similar reconstructed image quality, superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent computational expense of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
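A generic sketch of choosing a Tikhonov regularization parameter by minimizing the GCV score with a Nelder-Mead (simplex) search, in the spirit of the simplex-based optimization described above. The GCV criterion and the toy random problem are stand-ins; the paper's LSQR/Lanczos machinery and its actual objective are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def tikhonov_solve(A, b, lam):
    """Regularized LS solution argmin ||Ax-b||^2 + lam^2 ||x||^2, via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

def gcv_score(log_lam, A, b):
    """Generalized cross-validation score as a function of log10(lambda)."""
    lam = 10.0 ** log_lam[0]
    m = A.shape[0]
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)               # Tikhonov filter factors
    beta = U.T @ b
    # ||r||^2 split into the range of A and its orthogonal complement
    resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return m * resid2 / (m - f.sum()) ** 2

# toy inverse problem; the simplex (Nelder-Mead) search over log10(lambda)
# mirrors the role of the simplex method in the abstract
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.05 * rng.normal(size=40)
res = minimize(gcv_score, x0=[-1.0], args=(A, b), method="Nelder-Mead")
lam_opt = 10.0 ** res.x[0]
x_hat = tikhonov_solve(A, b, lam_opt)
```

Searching over log10(lambda) rather than lambda itself keeps the simplex steps well scaled across the many orders of magnitude a regularization parameter can span.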
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing subject-specific information depends on this extraction. On the basis of WorldView-2 high-resolution data, and to establish an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. First, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...
2016-01-01
This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
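The two-step idea, a coarse grid traverse to narrow the search space followed by PSO inside the best cell, can be sketched on a toy 2-D objective standing in for an SVR's hyperparameters. The grid resolution and PSO coefficients below are conventional defaults, not the paper's settings.

```python
import numpy as np

def two_step_search(f, bounds, grid_pts=5, n_particles=15, n_iters=40, seed=0):
    """Coarse grid traverse, then PSO refinement inside the best grid cell."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds)[:, 0], np.array(bounds)[:, 1]
    # step 1: grid traverse to find the most promising region
    axes = [np.linspace(l, h, grid_pts) for l, h in zip(lo, hi)]
    grid = np.array(np.meshgrid(*axes)).reshape(len(bounds), -1).T
    best = grid[np.argmin([f(p) for p in grid])]
    cell = (hi - lo) / (grid_pts - 1)        # local search radius
    lo2, hi2 = np.maximum(lo, best - cell), np.minimum(hi, best + cell)
    # step 2: particle swarm inside the narrowed box
    x = rng.uniform(lo2, hi2, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo2, hi2)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# toy loss with its minimum at (0.3, -0.2)
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2
best, val = two_step_search(f, [(-1, 1), (-1, 1)])
```

The grid stage caps the number of expensive global evaluations, while the swarm only ever explores the one cell the grid flagged, which is the division of labor the GTA + PSO scheme describes.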
Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam; Noraziah, A
2017-01-01
In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there remain observable areas needing improvement, especially in terms of the system's overshoot and steady-state errors. Using the ABO algorithm, in which each buffalo location in the herd is a candidate solution for the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from the simulation of PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle-Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), PID, Bacteria-Foraging Optimization PID (BFO-PID), etc., make ABO-PID a good addition for solving PID controller tuning problems using metaheuristics.
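A hedged sketch of metaheuristic PID tuning on a toy first-order plant, minimizing the ITAE criterion. Plain random search over the gain space stands in for the ABO herd update rule, and the plant, gain ranges, and cost are invented for illustration; none of this reproduces the paper's AVR model.

```python
import numpy as np

def step_cost(kp, ki, kd, dt=0.01, t_end=2.0):
    """ITAE cost of the unit-step response of the toy plant dy/dt = -y + u
    under discrete PID control (explicit Euler integration)."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)
        if not np.isfinite(y) or abs(y) > 1e6:   # unstable gain combination
            return float("inf")
        cost += (k * dt) * abs(err) * dt          # integral of t*|e(t)|
    return cost

# candidate "buffalo" locations in gain space, keeping the best
# (the actual ABO location-update equations are omitted)
rng = np.random.default_rng(1)
best_gains, best_cost = None, float("inf")
for _ in range(300):
    kp, ki, kd = rng.uniform([0, 0, 0], [20, 20, 1])
    c = step_cost(kp, ki, kd)
    if c < best_cost:
        best_gains, best_cost = (kp, ki, kd), c
```

The time weighting in ITAE penalizes late error, so the search naturally favors gains that kill both overshoot and steady-state error, which are the two shortcomings the abstract highlights.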
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level across multiple views with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize the 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and the camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Performance optimization of a miniature Joule-Thomson cryocooler using numerical model
NASA Astrophysics Data System (ADS)
Ardhapurkar, P. M.; Atrey, M. D.
2014-09-01
The performance of a miniature Joule-Thomson cryocooler depends on the effectiveness of the heat exchanger. The heat exchanger used in such a cryocooler is a Hampson-type recuperative heat exchanger, and the design of an efficient heat exchanger is crucial for optimum performance of the cryocooler. In the present work, the heat exchanger is numerically simulated for steady-state conditions and the results are validated against experimental data available from the literature. An area correction factor is identified for the calculation of the effective heat transfer area, which takes into account the effect of the helical geometry. To obtain optimum performance of the cryocooler, operating parameters such as mass flow rate and pressure, and design parameters such as heat exchanger length, helical coil diameter, fin dimensions, and fin density, have to be identified. The present work systematically addresses this aspect of the design of a miniature J-T cryocooler.
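The recuperator effectiveness that drives this design problem can be illustrated with the standard counterflow effectiveness-NTU relation. This is a textbook formula, not the paper's numerical model; the NTU and capacity-ratio values below are illustrative.

```python
import math

def effectiveness_counterflow(ntu, c_ratio):
    """Effectiveness of a counterflow recuperator from NTU and the
    capacity-rate ratio C_min/C_max (standard epsilon-NTU relation)."""
    if abs(c_ratio - 1.0) < 1e-12:           # balanced limit
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

# doubling the heat-transfer area (hence NTU) gives diminishing returns
print(effectiveness_counterflow(2.0, 0.95), effectiveness_counterflow(4.0, 0.95))
```

Because NTU is proportional to the heat-transfer area, length and fin parameters enter the design through it, and the diminishing returns of effectiveness with NTU are why the geometric parameters listed in the abstract must be balanced against flow and pressure conditions rather than simply maximized.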
Discovery of Potent, Orally Bioavailable Inhibitors of Human Cytomegalovirus
2016-01-01
A high-throughput screen based on a viral replication assay was used to identify inhibitors of the human cytomegalovirus. Using this approach, hit compound 1 was identified as a 4 μM inhibitor of HCMV that was specific and selective over other herpes viruses. Time of addition studies indicated compound 1 exerted its antiviral effect early in the viral life cycle. Mechanism of action studies also revealed that this series inhibited infection of MRC-5 and ARPE19 cells by free virus and via direct cell-to-cell spread from infected to uninfected cells. Preliminary structure–activity relationships demonstrated that the potency of compound 1 could be improved to a low nanomolar level, but metabolic stability was a key optimization parameter for this series. A strategy focused on minimizing metabolic hydrolysis of the N1-amide led to an alternative scaffold in this series with improved metabolic stability and good pharmacokinetic parameters in rat. PMID:27190604
Hau, Jean Christophe; Fontana, Patrizia; Zimmermann, Catherine; De Pover, Alain; Erdmann, Dirk; Chène, Patrick
2011-06-01
The development of new drugs with better pharmacological and safety properties mandates the optimization of several parameters. Today, potency is often used as the sole biochemical parameter to identify and select new molecules. Surprisingly, thermodynamics, which is at the core of any interaction, is rarely used in drug discovery, even though it has been suggested that the selection of scaffolds according to thermodynamic criteria may be a valuable strategy. This poor integration of thermodynamics in drug discovery might be due to difficulties in implementing calorimetry experiments despite recent technological progress in this area. In this report, the authors show that fluorescence-based thermal shift assays could be used as prescreening methods to identify compounds with different thermodynamic profiles. This approach allows a reduction in the number of compounds to be tested in calorimetry experiments, thus favoring greater integration of thermodynamics in drug discovery.
Optimization of hydraulic turbine governor parameters based on WPA
NASA Astrophysics Data System (ADS)
Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao
2018-01-01
The parameters of the hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, and thus the regulation capacity and power quality of the power grid. The governor of a conventional hydropower unit is typically a PID governor with three adjustable parameters, which are difficult to tune. To optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, owing to WPA's good global optimization capability. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.
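The abstract does not spell out WPA's update rules, so the sketch below substitutes a simple random local search as a stand-in for the metaheuristic, tuning the three PID gains against a hypothetical first-order plant. The ITAE-plus-overshoot cost is an assumed objective chosen to penalize both slow settling and the overshoot the paper says WPA suppresses; none of the plant or cost constants come from the paper.

```python
import random

def simulate_pid(kp, ki, kd, tau=1.0, dt=0.01, t_end=5.0):
    """Cost of a unit-step response of a first-order plant (time constant tau)
    under PID control: ITAE plus a penalty on overshoot."""
    y, integ, prev_err = 0.0, 0.0, 1.0
    cost, t = 0.0, 0.0
    while t < t_end:
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau      # plant: tau*dy/dt = -y + u
        prev_err = err
        t += dt
        cost += t * abs(err) * dt     # ITAE term
        if y > 1.0:
            cost += 10.0 * (y - 1.0) * dt  # overshoot penalty
    return cost

def tune(iters=200, seed=0):
    """Random local search over (kp, ki, kd) as a stand-in for WPA."""
    rng = random.Random(seed)
    best = (1.0, 0.5, 0.05)
    best_cost = simulate_pid(*best)
    for _ in range(iters):
        cand = tuple(max(0.0, g * (1.0 + rng.uniform(-0.3, 0.3))) for g in best)
        c = simulate_pid(*cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

A real WPA implementation would replace `tune` with the scouting/summoning/besieging moves of the wolf pack, but the plant-in-the-loop cost evaluation would look the same.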
Optimal critic learning for robot control in time-varying environments.
Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Lee, Tong Heng
2015-10-01
In this paper, optimal critic learning is developed for robot control in a time-varying environment. The unknown environment is described as a linear system with time-varying parameters, and impedance control is employed for interaction control. Desired impedance parameters are obtained in the sense of an optimal realization of the composite of trajectory tracking and force regulation. Q-function-based critic learning is developed to determine the optimal impedance parameters without knowledge of the system dynamics. The simulation results are presented and compared with existing methods, and the efficacy of the proposed method is verified.
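The abstract leaves out the critic's update equations, so the sketch below only illustrates the impedance-control layer it builds on: a point mass driven toward a desired position through an assumed target impedance (stiffness K, damping D) while in contact with a hypothetical linear environment of stiffness `ke`. All constants are illustrative; the paper's contribution, learning the optimal K and D via a Q-function critic, is not reproduced here.

```python
def impedance_step(x, xd, dx, f_ext, K, D, M=1.0, dt=0.001):
    """One explicit-Euler step of the target impedance behaviour
    M*ddx = -K*(x - xd) - D*dx + f_ext."""
    ddx = (-K * (x - xd) - D * dx + f_ext) / M
    return x + dt * dx, dx + dt * ddx

def settle(K=100.0, D=20.0, ke=500.0, xe=0.0, xd=0.01, steps=20000):
    """Simulate contact with a linear environment f_ext = -ke*(x - xe) for x > xe;
    return the steady-state position and contact force."""
    x, dx = 0.0, 0.0
    for _ in range(steps):
        f_ext = -ke * (x - xe) if x > xe else 0.0
        x, dx = impedance_step(x, xd, dx, f_ext, K, D)
    f_contact = ke * (x - xe) if x > xe else 0.0
    return x, f_contact
```

The steady state x = K*xd/(K + ke) makes the tracking/force trade-off the abstract mentions concrete: a stiffer K tracks `xd` more closely but presses harder on the environment, which is exactly the trade-off an optimal choice of impedance parameters must balance.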
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and secondary models are then fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows the experimental workload and costs to be reduced and model identifiability to be improved, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate the number of experimental data points and the time needed in each approach, as well as the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square-root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained with the TSM approach were b = 0.0290 (±0.0020) 1/(h^0.5 °C) and Tmin = -1.33 (±1.26) °C, with R^2 = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) 1/(h^0.5 °C) and Tmin = -0.24 (±0.55) °C, with R^2 = 0.990 and RMSE = 0.436.
The parameters obtained with the OED approach had smaller confidence intervals and better statistical indices than those from the TSM approach. Moreover, fewer experimental data points and less time were needed to estimate the model parameters with OED than with TSM. Furthermore, the OED model parameters were validated against non-isothermal experimental data with great accuracy. The OED approach is thus feasible and a very useful tool for improving the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
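The square-root (Ratkowsky-type) secondary model named in the abstract relates the maximum specific growth rate to temperature via sqrt(mu_max) = b*(T - Tmin). A minimal sketch using the OED parameter estimates reported above (b = 0.0316 1/(h^0.5 °C), Tmin = -0.24 °C); the evaluation temperature of 8 °C is an illustrative choice, not a value from the paper.

```python
def sqrt_model_mu_max(T, b, T_min):
    """Square-root secondary model: sqrt(mu_max) = b*(T - T_min).
    Returns mu_max in 1/h; zero at or below the notional minimum temperature."""
    if T <= T_min:
        return 0.0
    return (b * (T - T_min)) ** 2

# OED estimates reported in the abstract:
b_oed, Tmin_oed = 0.0316, -0.24
mu_8C = sqrt_model_mu_max(8.0, b_oed, Tmin_oed)   # mu_max at an assumed 8 degC
```

Plugging a non-isothermal temperature profile T(t) through this function is what lets the OED approach fit the secondary model directly to dynamic growth data instead of fitting it in a second step.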